(Part 1/3) Raspberry Pi Homelab with Kubernetes

In the Beginning

I’ve been running Pi-hole on a Raspberry Pi 3B wired into my wifi router for most of the last year, and it’s been great. So when the new Raspberry Pi 4 came out, I picked one up. It sits on my desk, mostly for easy access to its USB ports, which lets me hook it up to some of my ESP32 devkits and push MicroPython code onto them. The Pi 4 has been a great general-purpose development environment.

Recently, I’ve been wanting to write some trivial web endpoints for “internal” dashboards and such for the house. Plus, it’s a great excuse to learn Golang. In this day and age, clearly a dockerized Golang dev environment is the way to go. Have I truly built something if my dev environment isn’t dockerized?

So we’re agreed that dockerizing my dev environment is the way to go. And if my dev environment is dockerized, surely my app deployments should use containers too? Nothing less will do. But now I need a way to deploy and orchestrate said containers. I know! I should run a Kubernetes cluster across my two Pis! Might as well run Pi-hole on it as well; how hard can it be?

So that is what I spent the better part of last week figuring out.

Mandatory xkcd

This blog post walks through what I did, and how I did it. Its purpose is two-fold -

  1. It is a map to allow me to retrace my steps if I need to
  2. Perhaps it may prove of (dubious) use to you.

Starting Off

So, both my Pis run Ubuntu Server. I decided I should start from scratch, and flashed the latest Ubuntu Server image onto the SD cards for both Pis. Being a very optimistic person by nature, I expected to have Pi-hole back up and running on this new Kubernetes cluster within a day, and a day of unfiltered ads was a small price to pay for the experience. Alas, it was close to a week before I had Pi-hole working on my network again. But yay! You get to learn from my experience!

I didn’t have much of an understanding of Kubernetes components going into this project - but hey, that’s what projects like this are meant to give you, and boy, did it. So fret not if you don’t understand some of these terms; the Kubernetes documentation pages are great!

Disclaimer

None of this work is original. I cobbled together guides and walkthroughs from various sources to get to this Frankenstein’s monster of a post that you see here. You can find links to the sources I used at the end of this page.

Getting the Pre-requisites in Place

The first step in this journey is making sure you have the required packages on all your machines. In my case, this was two machines - the Pi 4 (called Terminus) and the Pi 3B (called Trantor). You need docker, kubelet, kubeadm and kubectl installed on all your nodes. Terminus will be my master node; Trantor will be my worker. Asimov fans may protest that the Second Foundation was on Trantor, after all, but let’s go with this for now. Setting static IPs for the master and workers in your cluster also helps, but I won’t cover that here.

Update apt repos and packages.

sudo apt-get update
sudo apt-get upgrade

Install Docker using the convenience script. Yes, shame on you for blindly running a script you downloaded from the internet.

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Let’s make sure our non-root user can use Docker.

sudo usermod -aG docker $USER
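
Note that the new group membership only takes effect on your next login. One way to pick it up immediately in the current shell, and sanity-check that Docker works, is -

newgrp docker
docker run --rm hello-world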

Now there’s some additional setup that needs to be done in order to get Kubernetes to work on the Raspberry Pi - specifically, enabling cgroups. You can do this by editing the file /boot/firmware/cmdline.txt and adding the following options at the end of the (single) line that’s already there.

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1

You’ll need to reboot the Pi after this.
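
If you’d rather script the edit, here’s a minimal sketch, assuming the file lives at /boot/firmware/cmdline.txt as it does on Ubuntu for the Pi. Back the file up first, and remember that cmdline.txt must remain a single line.

# Back up, then append the cgroup options to the end of the existing line
sudo cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/firmware/cmdline.txt
sudo reboot

After the reboot, cat /proc/cgroups should show a 1 in the enabled column for the memory subsystem.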

Add the K8s apt repo.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

You’ll notice we’re using kubernetes-xenial, which was the latest repository available at the time of writing this. Update this to the latest one available if you need to.

Let’s install our main K8s helpers. We’ll also make sure they’re excluded from any system upgrades. As the Kubernetes documentation says, “kubeadm and kubectl require special attention to upgrade.”

sudo apt update && sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
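
If you want to double-check what got installed (and that the hold took), these commands come in handy -

kubectl version --client
kubeadm version
apt-mark showhold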

Hello Cluster!

Create the cluster by running the following commands on the master node only. Pay special attention to the --pod-network-cidr parameter; you’ll need this CIDR range later on when setting up Flannel.

# Create the bootstrap token
TOKEN=$(sudo kubeadm token generate)
sudo kubeadm init --token=${TOKEN} --pod-network-cidr=10.10.0.0/16

Congratulations. You are now the proud owner of a bare-metal Kubernetes cluster (with one node). Admire the output, and consider running the commands it asks you to. For example, you’ll need a config file at $HOME/.kube/config if you want kubectl to work without too much hassle. Also make special note of the kubeadm join command; you’ll need to run that on your worker nodes.

These are the commands that the output from the previous step suggests you run. Run them on the master node, in case that isn’t clear.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Go run the kubeadm join command on all the worker nodes you’d like to dedicate to this cluster. I’ll wait.
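
For reference, the join command printed by kubeadm init looks something like the following - the values here are placeholders, so use the ones from your own output.

sudo kubeadm join <master_ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

Once the workers have joined, you can confirm they registered by running this on the master -

kubectl get nodes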

Sidebar: Kubernetes Manifests

Going through this guide, you’ll quickly become familiar with the command kubectl apply. This command “applies a configuration to a resource” in Kubernetes parlance, and is typically given a YAML “manifest” file as a parameter.
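
As a tiny illustrative example - a hypothetical pod named hello-pod running the stock nginx image - a manifest looks like this.

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: nginx

Save it as hello-pod.yaml, then -

kubectl apply -f hello-pod.yaml

Applying the same manifest twice is harmless (apply is idempotent), and kubectl delete -f hello-pod.yaml tears the pod back down.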

Flannel

So now we have a cluster, but technically Kubernetes doesn’t know how to handle networking between any pods that are scheduled on this cluster - at least, that’s what I’ve understood. This is why you need an addon like Flannel to handle this for you. You can find a full list of Networking and Network Policy Addons here. But in case it isn’t clear yet, we’ll use Flannel.

If you specified a --pod-network-cidr parameter when creating your cluster, you’ll need to edit the Flannel manifest with this CIDR before you apply it to the cluster.

Let’s download the default Flannel manifest.

curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml --output kube-flannel-updated.yml
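
For reference, the section you’ll be editing looks something like this in the default manifest (the exact contents may vary with the Flannel version) -

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }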

Open the file up in your favourite editor, and find the key net-conf.json. Update the CIDR given there to the right CIDR for your cluster - 10.10.0.0/16 in my case, matching what we passed to kubeadm init. Once done, apply the manifest like so.

kubectl apply -f ./kube-flannel-updated.yml

To check if this worked, run the following command to get all pods running on your cluster.

kubectl get pods -A

You should see coredns and kube-flannel pods running, like so. There are two kube-flannel pods because Flannel runs as a DaemonSet - one pod per node, and I have two nodes - while the coredns Deployment runs two replicas by default.

NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            coredns-f9fd979d6-h9m47                       1/1     Running   1          3d2h
kube-system            coredns-f9fd979d6-m5jrd                       1/1     Running   1          3d2h
kube-system            kube-flannel-ds-2ngxd                         1/1     Running   1          3d2h
kube-system            kube-flannel-ds-kqflv                         1/1     Running   1          3d2h

Sidebar: Kubernetes Namespaces

Namespaces are used to isolate pods and services running on the same cluster. My data engineer brain thinks of the cluster as a database and namespaces as schemas, but I could be mistaken and maybe should be thinking of the cluster as a single database install, and the namespaces as individual databases. Or maybe, this is entirely the wrong abstraction to bring in. Scratch all of this, let’s move on.
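
Whatever the right abstraction is, the mechanics are straightforward - you can list the namespaces on your cluster, and scope most kubectl commands to one of them with the -n flag.

kubectl get namespaces
kubectl get pods -n kube-system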

Baby’s first steps - Kubernetes Dashboard

We now have a cluster that knows how to handle pod networking. Let’s run something on it! How about the Kubernetes Dashboard, so that you have something pretty to show your non-technically-inclined significant other as the output of your hard work?

Behold! The fruits of your labour!

We’ll create a namespace to hold everything related to the Kubernetes Dashboard. I’m calling the namespace - kubernetes-dashboard. Very imaginative, no?

kubectl create namespace kubernetes-dashboard

We’ll now download the manifest file for the Kubernetes Dashboard, because we need to make some changes.

curl https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml --output kubernetes-dashboard.yaml

I spent a few days trying to figure out why the manifest did not work out of the box; it kept failing when trying to pull the Docker image. I worked around this by doing two things (see the sketch after this list) -

  1. Ran docker pull kubernetesui/dashboard:v2.0.0 to cache a local copy of the docker image.
  2. Commented out the imagePullPolicy: Always in the manifest file under the kubernetes-dashboard deployment block.
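
For reference, here’s a rough, heavily abridged sketch of what the edited kubernetes-dashboard Deployment block ends up looking like - match the image tag to whatever you pulled locally.

    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0
          # imagePullPolicy: Always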

For the more K8s-experienced among you - you may be wondering why I did not try using the Helm chart. I did. The Kubernetes Dashboard needs to run two services - dashboard-metrics-scraper and kubernetes-dashboard - and the Helm chart only seemed to bring up kubernetes-dashboard. I’m sure I was doing something wrong, but at this point my patience was wearing thin and I just wanted to get on with it.

OK, so now we have an edited manifest; let’s apply it.

kubectl apply -f kubernetes-dashboard.yaml

It takes a little bit of time for the dashboard to come up. You can amuse yourself by watching the pods as they spin up, as follows -

watch kubectl get pods -n kubernetes-dashboard

You can get details on a specific pod by running -

kubectl describe pod <pod_name> -n kubernetes-dashboard

You can also tail logs on a specific pod by running -

kubectl -n kubernetes-dashboard logs <pod_name> -f

Once you see the dashboard services up and running, let’s figure out how we actually get access to the dashboard UI.

We’ll assume that you haven’t configured kubectl on your local machine and are instead running all these commands from your (headless) Raspberry Pi.

Run kubectl proxy first. This exposes the cluster API server over HTTP to the host on which it is run. The output of this command should provide a port on which the API server is exposed, typically 8001.

kubectl proxy

Now let’s set up local port forwarding on your laptop or desktop so that you can access the dashboard web UI from your browser. You want traffic to port 8001 of your local machine to be forwarded to port 8001 of the master node (where you’re running kubectl proxy).

ssh -L 8001:127.0.0.1:8001 <my_user>@<master_node>

Open up your favourite browser and navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. You should see a login page. We need a token to actually get access to the admin dashboard, and tokens are associated with Service Accounts on the cluster. Let’s create our first Service Account. Create a manifest file called admin-user.yaml with the following contents -

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

Apply it with

kubectl apply -f ./admin-user.yaml -n kubernetes-dashboard

Now, to bind this user to a ClusterRole so that the user has permissions to actually see or do something on the dashboard, create a file called cluster-role-binding.yaml with the following content -

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply it with -

kubectl apply -f ./cluster-role-binding.yaml -n kubernetes-dashboard

Now, let’s find the name of the Kubernetes secret which holds the token for this user.

kubectl get serviceaccounts admin-user -n kubernetes-dashboard -o yaml

Look for the section called secrets. You should find a key called name with a value like admin-user-token-XXXXX. Run the following command (substituting your secret’s name) to get the actual token.

kubectl describe secret admin-user-token-xxxxx -n kubernetes-dashboard
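
Alternatively - assuming your cluster auto-creates a token secret for each service account, as clusters of this vintage do - you can fetch and decode the token in one go.

kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get serviceaccount admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode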

Copy the token value and paste that into the login page for the Dashboard on your browser. You should be in. Remember to terminate your kubectl proxy and local port forwarding when you’re done.

Congratulations! You have deployed your first workload on your homelab Raspberry Pi Kubernetes cluster!

Next Steps

You’ve created a bare-metal Kubernetes cluster, set up container networking using Flannel, and deployed the Kubernetes Dashboard on it.

In Part 2, we’ll set up a network load balancer for our bare-metal cluster (MetalLB) and figure out how to expose the dashboard as a load-balanced service with an external IP.

In Part 3, we’ll actually figure out how to run Pi-hole on this cluster (including enabling Pi-hole DHCP)!

References

Edit: Discussion on Hacker News here.
