Two-Host-Long Story of Setting Up Your Kubernetes Lab
Introduction
Last week we took a dive into the Kubernetes documentation to better understand what we'll be dealing with. Now it's time to set up our lab environment. First I'll give you a rough explanation of my hardware setup, without in-depth instructions, as this will differ for everybody, and then we'll go step by step through configuring the control plane and joining additional nodes to the cluster.
If you want to follow along you'll need at least two Debian hosts (another distribution can work too, but some adjustments might be required) connected to the same network.
I'll be using two ThinkCentre M715q Tiny minicomputers (Ryzen 5 Pro 2400GE, 16 GB RAM). They're not top-notch home server hardware, but I caught them really cheap and they should definitely suffice for our purposes.
Here's a quick overview of how I prepared my hosts.
Disclaimer: If you don't want to configure a cluster on multiple hosts but still want to follow our journey, check out minikube. It's designed to quickly spin up a local cluster, which for our purposes will be more than enough.
Step 1: Network Separation
Kubernetes is NOT an operating system, so we first need to set up our hosts before we can proceed with the installation. Before that, though, I had one issue to solve: I knew our journey together would generate some insecure states inside the cluster in the future (after all, we are here to learn), and I definitely didn't want any of that inside my home network, with access to all my other devices. To solve that I created a separate network for the lab, so that there's no communication between the lab and the rest of my network. Better safe than sorry.
Step 2: Hypervisor and Virtual Machines
Taught by experience, I know it's always good to have some sort of snapshot/backup when working with complex environments. I strongly believe we'll use them on more than one occasion, and having them will make the whole journey much more pleasant. They'll also let us take more risks, since we'll know that if something goes wrong we can always bring things back to a working state. Now, I'm not an expert on backing up Linux machines, but I believe it could get troublesome. Luckily there's a really easy solution: a type 1 hypervisor, i.e. a bare-metal solution (meaning we install it as the operating system rather than as software on top of one). I used Proxmox, an open source solution that lets me easily manage virtual machines. To avoid overthinking and making the journey harder than it needs to be, I created one virtual machine per host. For the operating system I went with Debian: it's well supported, simple and stable, and we definitely don't want to add more complexity here, as that's not our goal.
Step 3: Easy and Secure Access
During the installation process (and maybe in the future) we'll need to configure things on our Debian machines. I've decided to take some extra steps to ensure bare-minimum security standards are met: I generated SSH keys, uploaded them to the Debian hosts, disabled password login, disabled root login, added my key's passphrase to the keychain, and configured .ssh/config. Now I can easily SSH into the machines using my private key, without having to worry about a password, with a simple ssh kubeadmin or ssh kubenode.
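For reference, here's roughly what the relevant part of my ~/.ssh/config looks like (the hostnames, user and IPs below are made up for illustration; adjust them to your own network):

# example lab entries; IPs, user and key path will differ for you
Host kubeadmin
    HostName 192.168.100.10
    User kube
    IdentityFile ~/.ssh/id_ed25519
Host kubenode
    HostName 192.168.100.11
    User kube
    IdentityFile ~/.ssh/id_ed25519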
With that out of the way, let's get to it: time to configure our own Kubernetes cluster.
Let’s get building — Node Preparation
(remember to do this on all of your hosts!)
After a somewhat lengthy introduction, it's finally time to start configuring the cluster. This part is split into two substeps: first we'll prepare the hosts with all the software and configuration required to act as a node in a cluster, and then we'll pick one node and promote it to control plane. Once the control plane is up and running, all that's left is to join the other node to the cluster, et voilà, our cluster is ready to go.
The first very important thing we need to do is disable swap. For those unfamiliar with it: swap is basically "free" additional RAM. When configured, the system can use disk storage to help itself when it's running out of available RAM. The tradeoff is performance, as disk is noticeably slower. Kubernetes aims to utilise 100% of available resources and guarantee stable performance across the cluster, and to achieve this the decision was made to NOT support swap.
To disable swap we first need to edit the /etc/fstab file:
sudo vim /etc/fstab
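Inside, find the swap entry and comment it out, so swap stays disabled after a reboot. Afterwards mine looked roughly like this (the UUID is just a placeholder; yours will differ):

# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none swap sw 0 0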
Then, we need to run:
sudo swapoff -a
This disables swap on all known devices and files (both from /proc/swaps and /etc/fstab).
Another change Kubernetes needs is IPv4 forwarding, which allows network packets to be routed between pods and nodes, ensuring seamless communication across the cluster. This is needed for pod-to-pod communication, service routing, and for the network plugins to function.
To achieve this run:
sudo sed -i "s/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/" /etc/sysctl.conf
which uncomments the line enabling this behaviour.
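The file is only read at boot, so to apply the change immediately and confirm it took effect, run:

sudo sysctl -p
sysctl net.ipv4.ip_forward   # should print: net.ipv4.ip_forward = 1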
Docker time?
Now we need to install some things that will be required to run our cluster. I decided to link to official installation documentation for two reasons:
- time-proofing
- availability of instructions for different systems
I believe the instructions linked here are really well made and, if followed carefully, shouldn't give you too many issues (well, I had some, but that was due to my overconfidence blinding me).
First we will need to install Docker, following the official guide: https://docs.docker.com/engine/install/debian/
I really encourage you to take a minute and also enable your non-root user to run docker commands; this makes debugging SO much easier and faster.
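If you skipped the guide's optional post-install section, it boils down to adding your user to the docker group and refreshing the group in your current shell (or logging out and back in):

sudo usermod -aG docker $USER
newgrp docker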
If you followed the guide step by step, without skipping anything (right?), you can now enjoy Docker working on your system. Let me warn you: if you had issues here, especially ones that stemmed from skipping a step or missing a line, I recommend taking a break and brewing some coffee, because it only gets messier from here on. I spent a few hours debugging stupid mistakes and hope to help you avoid them. For now, though, let's just enjoy our hello world:
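docker run hello-world   # prints a welcome message if everything is wired up correctly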
The one and only Kube… damn why isn’t it working yet!
Not gonna lie, this part definitely gave me some grey hairs. Let's make sure it's not the case for you.
The entrypoint for the documentation is located here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
But who would even read that, let’s just jump right in. We first need to install three utilities:
- kubeadm — the tool required to build clusters. We will use it to bootstrap our control plane, and later to join additional nodes to the cluster.
- kubectl — our bread and butter command line tool for communicating with a Kubernetes cluster’s control plane, using the Kubernetes API.
- kubelet — this is our “node agent”, ensuring our containers run in pods.
We could build them ourselves from source, but I recommend just using the prepackaged versions from the apt repository:
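At the time of writing, the official instructions boiled down to roughly the following (the v1.30 in the repository URL is just the minor version that was current when I set this up; check the docs for the latest one):

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl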
Make sure to pin the versions (by running apt-mark hold, shown below); we want to manage them by hand, without worrying that a routine system update will mess with the cluster.
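sudo apt-mark hold kubelet kubeadm kubectl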
Now that we have everything we need, let's glue it all together into a Kubernetes cluster. Each node needs a container runtime configured; in our case we'll use containerd, which was already installed alongside Docker.
Now we need to talk about control groups (cgroups). This Linux kernel feature manages the allocation of resources (CPU time, RAM, network bandwidth). Each node needs them to divide resources between pods, as well as to monitor its own resources so it can properly report its state to the control plane. For this we need a cgroup driver; luckily for us, Debian (since Debian 8) uses systemd as its init system, which by default takes ownership of the root control group and manages resource allocation.
We need to achieve two things:
- enable the Container Runtime Interface (CRI), which is disabled by default in containerd
- configure containerd to use systemd's cgroup driver
To achieve that, we need to add this to /etc/containerd/config.toml:
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
Then run:
sudo sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = \[\]/I' /etc/containerd/config.toml
to enable CRI by removing it from the disabled plugins (the trailing I makes the match case-insensitive, since the entry may appear as "cri" or "CRI" depending on the containerd version).
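For the new configuration to take effect, restart containerd:

sudo systemctl restart containerd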
We. Are. Ready.
We've got everything we need: we configured the hosts, and we downloaded and configured the software. Time to launch our cluster.
I was so happy to run the command to initialise my cluster (I won't share it yet, to prevent you all from running it before reading; thank me later). But this was my mistake, and it led to MULTIPLE cluster removals and re-initialisations. Why? Because sometimes it's better to read the whole set of instructions first.
Yes, I lied to you. We still need one more component for our cluster to work, but it will be added after initialisation. We've got our nodes managing containers, pods and hardware resources… but who will manage networking in our cluster? Don't worry, that's what the Container Network Interface (CNI) is for. But that's just a standard; we need an implementation. After some quick research I decided to go with Flannel: it's small, simple and well known. It will suffice for now, and if we decide to explore more advanced use cases in the future, we'll change it.
"But you said we'll add it later, why delay initialisation?" Great question, dear reader. The answer is quite simple: we need to make sure our cluster's pod CIDR matches the one used by Flannel. To achieve this, we'll pass one flag to kubeadm. I decided to use Flannel's default CIDR, 10.244.0.0/16, but feel free to use another; just remember to change it BOTH in the cluster initialisation and later in the CNI deployment.
Our final command to start up our cluster is:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Make sure to follow the instructions on your screen, and use only ONE set of them: either the ones for a regular user OR the ones for root. You'll be able to revert mistakes, but it might cost you some time and nerves.
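For reference, these are the commands kubeadm prints for the regular-user path (they copy the admin kubeconfig into your home directory):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config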
Also, take note of your join command; you'll need it for the other nodes to join the cluster.
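If you lose it, don't panic; you can always generate a fresh one on the control plane:

kubeadm token create --print-join-command

The kubeadm output also reminds you about the next step: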
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can view the state of your pods by running:
kubectl get pods --all-namespaces
Make sure to look at the status of your pods. If you did everything correctly, everything should be in the Running state, except for the coredns pods, which will be stuck in Pending until you install your CNI (as mentioned before, I used Flannel):
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
If everything went without issues, you can now run
sudo systemctl restart kubelet
sudo systemctl restart containerd
After they restart, you can view the state of your pods again, and you should see everything running properly.
Now you can repeat the process (up until kubeadm init) on all of your other nodes, and once they're prepared just run your join command, the one returned during initialisation. As long as your nodes are connected and you didn't skip any steps, you should quickly see the new node in the cluster:
kubectl get nodes
Now we are ready for next week, when we finally start interacting with our cluster!
Behind the scenes
While the process doesn’t seem too hard, and everything seems to be properly documented, it took me a while to finally get it working.
My first mistake was not reading ahead and understanding CNIs properly. The second was only skimming through the cgroup instructions, which resulted in my control plane having MAJOR issues.
It was really hard to debug too, as my apiserver kept dying, denying me the option to even look at the logs through kubectl. It took me a while to realise I could also just use crictl, a CLI client for CRI-compatible runtimes.
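If you end up in the same spot, something like this (run on the affected node) lets you find the crashed container and read its logs; depending on your setup you may need to point crictl at the containerd socket with --runtime-endpoint unix:///run/containerd/containerd.sock:

sudo crictl ps -a                # list all containers, including crashed ones
sudo crictl logs <container-id>  # <container-id> comes from the ps output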
Pulling my hair out, I finally found a reddit post, which 100% represented my feelings:
There, among the comments I found my hero:
Hope you enjoyed our journey together through the dark and cold world of setting up our own cluster. The fun is about to begin — soon we’ll move onto deploying and configuring apps inside the cluster.
If you had any issues setting up your environment, feel free to ask questions in the comments; I'll be happy to help!
In the meantime if you haven’t yet, see other posts from this series: https://medium.com/@rakowskiii/journey-to-mastering-kubernetes-the-introduction-3ff7b26b76db