A Basic Guide to Building an HP Mini k8s Cluster

For all you homelabbers who haven't started with Kubernetes, now's the time! Hopefully, this will help make a complicated task a bit easier. Also, this doesn't have to be done with HP Minis (that was simply the hardware I used); it can be done with VMs, old computers, and even Raspberry Pis (though some of these steps may have software limitations on that hardware).

This will be the start of my third cluster rebuild (simply to keep my mind fresh and to practice). This example will show creating an HP Mini control plane and a VM as the first worker node. It is repeatable for other HP Minis; I am simply using a VM to try something new myself!

Let's Prepare the Environments

First, give the machines fun names that represent what they are. I have named past systems after space, ninja turtles, and ants. This time, let's use bees and refer to the machines as "queen" and "worker". The "queen" will be the control plane and the "worker" will be a worker node. As you add more nodes, you will be adding more "workers".

These systems will both be running Ubuntu Server 24.04.2 as a baseline, and both are fresh installs. So we...

Start with the standard update and upgrade of your packages.

sudo apt-get update && sudo apt-get upgrade -y

Disable swap on all the nodes (when referring to nodes, that means all machines unless otherwise specified). This may not be strictly required anymore, but there is a long history here: the kubelet traditionally refused to run with swap enabled.

Edit /etc/fstab and comment out the swap line so it looks similar to the example below.

sudo vi /etc/fstab
# changed line should look similar to this.
#/swap.img  none    swap    sw  0   0

Also, run swapoff -a to disable swap without rebooting.

sudo swapoff -a
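
To confirm swap is fully disabled, check the memory summary; the Swap row should show all zeros.

free -h
# The "Swap:" line should read 0B across the board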

Since we are using Ubuntu, we will add the repository for the Kubernetes packages used by Debian-based distributions.

sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list
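
If you want to sanity-check the repository later (after the next apt-get update), apt should report a v1.33 candidate coming from pkgs.k8s.io:

apt-cache policy kubeadm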

This is their "new" way of managing packages (the community-owned pkgs.k8s.io repository). It also pins the repository to a specific minor version (v1.33 here), so be mindful of that going forward if you are looking to upgrade!

We will now add Docker's repository in order to install the containerd.io package.

sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the apt repositories again!

# Update to include the new repositories  
sudo apt-get update

Now let's install the containerd.io package.

sudo apt-get install containerd.io

Optional: Note the version if you do want to pin the package for future upgrades. At the time I wrote this, containerd.io was at 1.7.27.
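
One quick way to check exactly what you received:

apt-cache policy containerd.io
containerd --version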

Begin with the k8s Bits

We will install different packages on the queen and worker.

For the queen, we will be installing the additional package kubectl to orchestrate the cluster.

sudo apt-get install -y kubelet kubeadm kubectl

Optional: This is the pinning part, or a "hold" in apt terms. This will prevent the packages from being upgraded in the event a new version is released. I would suggest this for the more important clusters, to keep better control of your upgrade process.

sudo apt-mark hold kubelet kubeadm kubectl containerd.io

Don't forget to include containerd.io!

For the worker, do the same, minus the kubectl package.

sudo apt-get install -y kubelet kubeadm
# Optional Step
sudo apt-mark hold kubelet kubeadm containerd.io

Back to running commands on both nodes: here are some additional kernel configurations needed for containerized environments. The overlay module supports container filesystem layers, and br_netfilter allows bridged traffic to be filtered. Create the file /etc/modules-load.d/containerd.conf with the following contents.

overlay
br_netfilter
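
If you'd rather not open an editor, a heredoc piped through tee creates the same file:

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF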

Now, let's load the modules into the kernel.

sudo modprobe overlay
sudo modprobe br_netfilter

The next step is setting up some networking. These settings let iptables see bridged traffic and enable IP forwarding. Create the file /etc/sysctl.d/kubernetes.conf and add the following.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Finally, run the following command to apply the sysctl settings without a reboot.

sudo sysctl --system
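
You can verify that the modules loaded and the sysctls took effect:

# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'
# Each of these should print a value of 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward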

(Maybe, Optional) Adding CIFS Support

This step may be optional: if you do want your cluster to access SMB shares, you will want to do this step as well.

sudo apt-get install cifs-utils

VOLUME_PLUGIN_DIR="/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

sudo mkdir -p "$VOLUME_PLUGIN_DIR/fstab~cifs"

cd "$VOLUME_PLUGIN_DIR/fstab~cifs"

sudo curl -L -O https://raw.githubusercontent.com/fstab/cifs/master/cifs

sudo chmod 755 cifs
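
To sanity-check the plugin, the fstab/cifs project suggests invoking its init hook directly (still in the plugin directory from the cd above); assuming the plugin behaves as documented, it should print a JSON success message:

sudo ./cifs init
# Expect something like: {"status": "Success", ...}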

Set Up containerd

Populate a default config.toml for containerd.

sudo containerd config default | sudo tee /etc/containerd/config.toml

We then configure containerd's runc runtime to use the systemd cgroup driver, which matches the kubelet's default on modern kubeadm installs. Find the appropriate section in /etc/containerd/config.toml and change it to the following:

```
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```
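
If you would rather script this change, a sed one-liner works against the default config generated above (this assumes the line currently reads SystemdCgroup = false):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Confirm the change took
grep -n 'SystemdCgroup' /etc/containerd/config.toml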

Restart the containerd service.

sudo systemctl restart containerd

Let's Start a Cluster!

Head over to queen and run a kubeadm init command. Please insert the IP of the queen machine in the command; that will be the "--apiserver-advertise-address":

sudo kubeadm init --pod-network-cidr=10.15.0.0/16 --apiserver-advertise-address= --ignore-preflight-errors=all
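
As a hypothetical example, if the queen's IP were 192.168.1.10, the filled-in command would look like this (your IP will differ):

# Hypothetical example: replace 192.168.1.10 with your queen's actual IP
sudo kubeadm init --pod-network-cidr=10.15.0.0/16 --apiserver-advertise-address=192.168.1.10 --ignore-preflight-errors=all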

You will also be provided with a join command; copy that and keep it handy to add any worker nodes. The output will look something like this:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

Access to the Cluster

We are now going to give the queen the ability to connect to the cluster using kubectl. On queen, run the following commands.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
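
A quick check that kubectl can reach the API server (the queen will likely show NotReady until we install the network plugin in the next section):

kubectl get nodes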

Set Up the Cluster Networking

All the commands in this section will be run on queen.

We will configure flannel for the "layer 3 network fabric".

Download it to queen, modify the network parameter and deploy to the cluster.

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

vi kube-flannel.yml

# Modify the following parameters; ensure the "Network" value matches the --pod-network-cidr from the kubeadm init command
```
  net-conf.json: |
    {
      "Network": "10.15.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```

# Once saved, apply the yml
kubectl apply -f kube-flannel.yml
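
After a minute or so, you can confirm flannel is up and the queen has gone Ready:

kubectl get pods -n kube-flannel
kubectl get nodes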

We are now going to add MetalLB. This offers load-balancing functionality for the bare-metal cluster that is being built. We will start by enabling strictARP in kube-proxy.

kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system

We will now install the most recent version (v0.14.9 at the time; check the site for the most recent release).

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml

We will then create a required secret for MetalLB.

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Let's Get the Worker to Join

Remember that join command from the kubeadm init output? This is when we use it. Run that command on the worker (as root or with sudo).

At this point, running kubectl get pods -A on the queen should show output similar to this.

NAMESPACE        NAME                           READY   STATUS    RESTARTS        AGE
kube-flannel     kube-flannel-ds-kw5s6          1/1     Running   0               64s
kube-flannel     kube-flannel-ds-vl84d          1/1     Running   1 (3m27s ago)   36m
kube-system      coredns-674b8bbfcf-frw88       1/1     Running   1 (3m27s ago)   81m
kube-system      coredns-674b8bbfcf-n6r2h       1/1     Running   1 (3m27s ago)   81m
kube-system      etcd-nest                      1/1     Running   1 (3m27s ago)   81m
kube-system      kube-apiserver-nest            1/1     Running   1 (3m27s ago)   81m
kube-system      kube-controller-manager-nest   1/1     Running   1 (3m27s ago)   81m
kube-system      kube-proxy-4lnmf               1/1     Running   0               64s
kube-system      kube-proxy-9nkxs               1/1     Running   1 (3m27s ago)   81m
kube-system      kube-scheduler-nest            1/1     Running   1 (3m27s ago)   81m
metallb-system   controller-bb5f47665-r2w8g     1/1     Running   0               6m20s
metallb-system   speaker-ll7pj                  1/1     Running   0               49s
metallb-system   speaker-zgm26                  1/1     Running   2 (3m16s ago)   19m

Some pods might still be pending, or in crash loops. That is fine; it should be resolved shortly, in theory. For example, the metallb-system controller will try to run on a worker.

The Network Pool

Next, we set up the IPAddressPool defining the service IP addresses that will be available for use. Make sure the range is on your LAN and outside your DHCP server's range. Create a file titled metallb-pool.yml with the following contents:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.50-192.168.1.200

---

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system

Apply the yml file.

kubectl apply -f metallb-pool.yml
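
You can confirm both resources were created:

kubectl get ipaddresspools,l2advertisements -n metallb-system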

Congratulations!

If all went well, you now have the start of a Kubernetes adventure. You can try a test deployment of a proxy, or a different container; it should run fine. I did test this cluster with a few secrets and a deployment.
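
For example, a minimal smoke test (using nginx here, but any image works) is to create a deployment, expose it as a LoadBalancer service, and watch MetalLB hand it an IP from the pool:

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=LoadBalancer
# EXTERNAL-IP should come from the 192.168.1.50-200 pool
kubectl get svc nginx-test
# Clean up when done
kubectl delete svc,deployment nginx-test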

I hope this was helpful, please feel free to contact me if you have any concerns, suggestions or feedback. Thanks for reading!

BONUS: Installing Helm

Helm has become a very popular "package manager for Kubernetes" (that's their tagline on their main page). These are literally the commands from their site. It is a super simple installation to help get people started.

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
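
A quick check that the install worked:

helm version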