Kubernetes cluster on a RPi (part 1)

This post, and subsequent ones in the series, will be a bit of a learning journey, as I am using them to document my learning around Kubernetes. Obviously k8s (as the cool kids call it) has been around for a while (its 6th anniversary is today, 11th June 2020), and I understand the concepts behind it and containers in theory, but I want to make that a reality, so I have decided to build my own implementation without spending a fortune.

As a fan of the Raspberry Pi, this was my go-to hardware, so I purchased a couple of RPi 4 4GB boards (the 8GB version was released a few days later, doh!).
I've also wanted to skill up on the networking side of things for a while, as beyond IP addressing and routing it is a bit of a black art to me, so I bought myself a managed switch with PoE, to power the Pis via PoE HATs (when they come back in stock!).

My basic setup – no fancy cases (yet)

I’m going to be following this existing tutorial, but with a few changes, as I will be using the “newly” released 64-bit Raspberry Pi OS. I also want to incorporate my slightly different networking setup.

I want to use the wireless adapter on the RPis for management and external access, and I want to use a segregated VLAN on the switch, which the physical RPi ethernet ports are plumbed into, for the k8s master <-> node communication.

The usual Pi set-up routine was followed: writing the OS image, configuring the network, updating packages, etc., along with using raspi-config to set a proper hostname on each of the Pis.

I have given my master and worker nodes static ethernet port IPs by editing the /etc/dhcpcd.conf file:

interface eth0
static ip_address=10.10.10.2/24

I found that trying to restart the dhcpcd service caused the Pi to hang, so a reboot was easier to bring up the interface.
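A dhcpcd.conf stanza can also pin the router and DNS servers. Since this VLAN is isolated and only carries cluster traffic, just the address is needed here, but for reference a fuller stanza would look like this (the 10.10.10.1 values are illustrative assumptions, not my actual setup):

```
# /etc/dhcpcd.conf - fuller static configuration for eth0
# (router/DNS values are illustrative; the cluster VLAN in this
# post is isolated, so only the static address is actually required)
interface eth0
static ip_address=10.10.10.2/24
static routers=10.10.10.1
static domain_name_servers=10.10.10.1
```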

The original tutorial didn’t mention disabling swap, so I did that with the command:

sudo systemctl stop dphys-swapfile.service
sudo systemctl disable dphys-swapfile.service
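Stopping and disabling the service covers future boots; on Raspberry Pi OS the active swap file can also be turned off immediately with the dphys-swapfile tool itself. A sketch, assuming the standard Raspberry Pi OS swap tooling:

```shell
# Turn off the currently active swap file without waiting for a reboot
sudo dphys-swapfile swapoff

# Verify: the "Swap:" line should now show 0B total
free -h
```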

After installing Docker and carrying out the remediation to remove the warnings, I still had some warnings relating to cfs support. I found from another post that these can be ignored.

As I want all k8s connectivity to go through the physical ethernet ports, I added the --apiserver-advertise-address parameter to the kubeadm init command from the tutorial:

 sudo kubeadm init --token=${TOKEN} --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.10.2
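The ${TOKEN} variable needs to be set before running that command; kubeadm can generate one locally in the correct format. A sketch (the variable name is just my choice):

```shell
# Generate a bootstrap token locally (no running cluster needed) and
# keep it for both the init command and the later worker join
TOKEN=$(kubeadm token generate)
echo "${TOKEN}"
```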

After installing flannel as the CNI pod network, the master node goes Ready.
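For reference, the flannel install is roughly the following; the manifest URL is the one current at the time of writing, so check the flannel repository for the latest version:

```shell
# Apply the flannel CNI manifest, then watch the master flip to Ready
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get nodes -w
```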

Some of the commands I learnt so far:

kubectl get nodes (shows all nodes)
kubectl get deploy --all-namespaces (shows all deployments)
kubectl get po --all-namespaces (shows all pods)

Seeing what pods are running

I could then join my worker node to the cluster, and wait a few seconds for it to become ready.
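kubeadm init prints the exact join command at the end of its output; it looks roughly like this (the token and hash below are placeholders, so use the values from the printed output):

```shell
# Run on the worker node; 10.10.10.2 is the master's VLAN address
sudo kubeadm join 10.10.10.2:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash-from-the-init-output>
```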

Ta-da…a basic Kubernetes cluster running and ready. Next to run something on it.
