
Kubernetes Cluster

In this article, we will look at creating our small Kubernetes Cluster in our Home Lab. The Cluster will consist of 1 (one) Control Plane and 2 (two) worker Nodes. The Control Plane manages the worker Nodes and the Pods in the Cluster, while the worker Nodes run the components that maintain running Pods and provide the Kubernetes runtime environment.

<< Previous Xen Orchestra – Add Ubuntu Server VM’s

Control Plane and Node Setup

Let’s start with the Control Plane. SSH into the Control Plane virtual machine that we created earlier and follow the steps below.

After we complete these steps on the Control Plane, come back here and follow the same steps for each of our 2 Node virtual machines.

We will start by making sure everything in the OS is up to date.

sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade && sudo apt-get autoremove

Then, let's install nfs-common. You don't have to do this step, but I connect my virtual machines to an NFS mount from my home NAS. I mainly use this for storing my SQL Server database files in a durable location.

sudo apt install nfs-common
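If you want the NFS share mounted automatically as well, here is a minimal sketch of an fstab entry. The NAS address (192.168.86.20) and export path (/volume1/k8s) are just placeholders for this example, so swap in your own.

# Hypothetical NAS address and export path - adjust both for your environment.
sudo mkdir -p /mnt/nas
echo '192.168.86.20:/volume1/k8s  /mnt/nas  nfs  defaults  0  0' | sudo tee -a /etc/fstab
# Mount everything in fstab now to confirm the entry works.
sudo mount -a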

Turn off swap

In order to install and run the services we need for Kubernetes, we will need to disable the swap space. I use nano for my file editor on Ubuntu.

First, the command to turn it off:

sudo swapoff -a

And then let's disable it permanently by updating the fstab file.

sudo nano /etc/fstab

While we are in the editor, find the swap line (highlighted in the image below) and comment it out. Put a hash (#) in front of that line and then save off the changes.

[Screenshot: the swap entry in /etc/fstab, commented out]
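To double-check that swap is really gone, these two commands should come back empty and show 0B of swap, respectively.

# No output here means no active swap devices.
swapon --show
# The Swap line should read 0B across the board.
free -h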

No Docker any longer, Install containerd

With Docker being deprecated in Kubernetes, let's install containerd as our container runtime.

Prerequisites

We have a couple of prerequisites to take care of. We will start by loading the two kernel modules, overlay and br_netfilter.

sudo modprobe overlay
sudo modprobe br_netfilter

And then we will configure them to load on boot.

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF


#Set up required sysctl params; these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

The next step is to apply the sysctl parameters without a reboot.

sudo sysctl --system
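If you want to confirm that the modules are loaded and the sysctl parameters took effect, a quick check looks like this.

# Both modules should be listed.
lsmod | grep -E 'overlay|br_netfilter'
# All three values should come back as 1.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward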

Install containerd

Make sure everything is updated and then install containerd.

sudo apt-get update
sudo apt-get install -y containerd

containerd Configuration File

We will need to create a containerd configuration file. Let’s create a new directory for our config file and then grab the default config file and output that to a new file named config.toml.

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

This file needs a simple tweak to set the cgroup driver for containerd to systemd. This is required for the kubelet. For more info, check out the related discussions on GitHub.

So, let's look into this new file.

sudo nano /etc/containerd/config.toml

Find the section:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]

[Screenshot: the runc runtime section in config.toml]

And at the end of that section, after the "privileged_without_host_devices = false" line, add the following lines (and remember, indentation does matter here):

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

Your section should look like this:

[Screenshot: the runc options section with SystemdCgroup = true added]

Now go ahead and save off that file and we will restart containerd with our new configuration.

sudo systemctl restart containerd
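As a quick sanity check that the change stuck and containerd came back up happily:

# Should print the SystemdCgroup = true line we just added.
grep SystemdCgroup /etc/containerd/config.toml
# Should report active (running).
sudo systemctl status containerd --no-pager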

Install Kubernetes Packages – kubeadm, kubelet, and kubectl

For starters, we need to add Google’s apt repository gpg key.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Then add the Kubernetes apt repository and update the package lists again.

sudo bash -c 'cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF'

sudo apt-get update
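One heads up: the apt.kubernetes.io repository used above has since been deprecated in favor of the community-hosted pkgs.k8s.io repositories. If the commands above no longer work for you, a sketch of the newer setup looks like this. I am assuming the v1.30 package stream here, so adjust the version in both URLs to whatever release you want.

# Assumes the v1.30 package stream - change the version in both URLs as needed.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update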

Install the packages

I will not be installing a specific version of the packages and I will not be marking a hold on any of these packages since we are only doing this for our home lab.

sudo apt-get install kubelet kubeadm kubectl

If you wanted to target a specific version and then lock it in, you could go this route.

#Use apt-cache policy to inspect versions available in the repository.
apt-cache policy kubelet | head -n 20
#Designate the version that you want to install.
VERSION=1.20.1-00
sudo apt-get install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
#Lock in the version and have the system hold it so it is not updated.
sudo apt-mark hold kubelet kubeadm kubectl containerd

Let's now ensure that both our kubelet service and our containerd service start up when the system starts.

sudo systemctl enable kubelet.service
sudo systemctl enable containerd.service
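And a quick check that both are actually enabled:

# Both should report "enabled".
systemctl is-enabled kubelet containerd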

Control Plane Specific Setup

This section contains the instructions for setting up the Control Plane to do its job. We don't want to run these steps on the Nodes, just on the Control Plane.

I will use Calico for my cluster’s networking and network security.

When we create our Kubernetes Cluster, we will specify a Pod network range that matches the one in calico.yaml. Again, only on the Control Plane Node, download the yaml file for the Pod network.

wget https://docs.projectcalico.org/manifests/calico.yaml

Now, look inside calico.yaml and find the setting for the Pod network IP address range, CALICO_IPV4POOL_CIDR. Adjust it if needed so that the Pod network IP range doesn't overlap with other networks in your infrastructure. I have been able to leave mine alone.

sudo nano calico.yaml
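If you would rather jump straight to that setting, a grep will show it along with the value line beneath it. In the manifests I have seen, it ships commented out, which leaves Calico on its default pool of 192.168.0.0/16; uncomment and edit both lines only if that range collides with your network.

# Show the pool setting and the value line that follows it.
grep -n -A 1 'CALICO_IPV4POOL_CIDR' calico.yaml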

Generate a default kubeadm init configuration file…this defines the settings of the cluster being built.

If you get a warning about docker not being installed…this is OK to ignore and is a bug in kubeadm.

kubeadm config print init-defaults | tee ClusterConfiguration.yaml

Inside the default configuration file, we need to change three things.

  1. The IP Endpoint for the API Server, localAPIEndpoint.advertiseAddress.
  2. nodeRegistration.criSocket, changed from docker to containerd.
  3. The cgroup driver for the kubelet, set to systemd; it isn't set in this file yet, and the default is cgroupfs.

Change the address of the localAPIEndpoint.advertiseAddress to the Control Plane Node’s IP address. Note that my Control Plane VM is set to 192.168.86.51 on my local network.

sudo sed -i 's/  advertiseAddress: 1.2.3.4/  advertiseAddress: 192.168.86.51/' ClusterConfiguration.yaml

Set the CRI Socket to point to containerd.

sudo sed -i 's/  criSocket: \/var\/run\/dockershim\.sock/  criSocket: \/run\/containerd\/containerd\.sock/' ClusterConfiguration.yaml

Set the cgroupDriver to systemd…matching that of your container runtime, containerd.

cat <<EOF >> ClusterConfiguration.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

And take a quick peek to make sure your IP address has been set correctly in the ClusterConfiguration.yaml file.

nano ClusterConfiguration.yaml
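If you prefer to check it from the command line, a grep for the three things we changed works too.

# advertiseAddress should show your Control Plane IP, criSocket should point at containerd,
# and cgroupDriver: systemd should appear in the appended KubeletConfiguration block.
grep -E 'advertiseAddress|criSocket|cgroupDriver' ClusterConfiguration.yaml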

We also need to add the CRI socket, since there is a check for Docker in the kubeadm init process; if you don't, you'll get this error…

error execution phase preflight: docker is required for container runtime: exec: "docker": executable file not found in $PATH

With that in place, initialize the Cluster, pointing kubeadm at our configuration file and at the containerd socket.
sudo kubeadm init \
    --config=ClusterConfiguration.yaml \
    --cri-socket /run/containerd/containerd.sock

Configure our account on the Control Plane Node to have admin access to the API server from a non-privileged account.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

It's now time to create our Pod network by deploying the calico.yaml file we downloaded earlier.

kubectl apply -f calico.yaml
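It can take a minute or two for the Calico pods to come up. You can keep an eye on them, and on the Control Plane itself, with these commands.

# Wait until the calico pods are Running and coredns leaves Pending.
kubectl get pods -n kube-system
# The Control Plane should eventually report Ready.
kubectl get nodes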

This process will create a token that we need to use in order to join our Nodes to the Cluster. The token that gets created for us is only valid for a short time. Because I use this cluster internally and it is a home lab cluster, I create a non-expiring token.

I list out the tokens, delete the initial token that is in there, and then create my new token (that doesn't expire).

kubeadm token list
kubeadm token delete <token>  #  will be something like ab12igkd.9ddoqimeufe21ek6
# create new token
kubeadm token create --ttl 0
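As a side note, kubeadm can also print the complete join command for you, token and CA cert hash included, which saves the copy-and-paste we do below. I still walk through the manual route so you can see where each piece comes from.

# Prints a ready-to-run "kubeadm join ..." command for the worker Nodes.
kubeadm token create --print-join-command --ttl 0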

Cluster Nodes Setup

Now we will set up the Nodes, and we only have to issue one command on each Node to join it to the Cluster. In order to join them, we need to know our Control Plane's IP address, which, in my case, I already know is 192.168.86.51. We also need two bits of information from the token we created back on the Control Plane.

Here is the command we need. But wait! Don’t try to run this until we fill in the <> items!

sudo kubeadm join 192.168.86.51:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<token-hash>

First, go back to the Control Plane and list out the tokens again.

kubeadm token list

Then copy the token. Again it will be something like ‘ab12igkd.9ddoqimeufe21ek6’. Paste this in place of the <token> placeholder in the command you copied above.

The second piece of information we need is the hash from our token. In order to get this, issue this command:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

You should see a hash that is about 64 characters long. Copy it and place it in the <token-hash> placeholder. Your command should now look something like this:

sudo kubeadm join 192.168.86.51:6443 \
  --token ab12igkd.9ddoqimeufe21ek6 \
  --discovery-token-ca-cert-hash sha256:h65stjsmr7izoszjlvn9dynn3x8k9wh0tjtv20eydd29kp8ge0ps6jkuevaez4bw

And don’t worry, I made up the token and hash, so no leaks from me!!

Run this command on all of your Nodes and they will join the Cluster and be controlled by the Control Plane.
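Back on the Control Plane, you can confirm that both Nodes joined and eventually report Ready.

# All three machines should show up; new Nodes can take a minute to go Ready.
kubectl get nodes -o wide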

Next Steps

Next up, we will look at our Director VM and set that up to manage our Cluster interactions. We will load Helm on this VM so that we can apply Helm charts to our Cluster.

>> Next Kubernetes Cluster Management

Hope to see you then!

This post is licensed under CC BY 4.0 by the author.