To better understand K8s through hands-on practice, I wanted to set up a K8s cluster myself. There are many ways to self-host K8s nowadays, such as minikube and Kubeadm. I decided to go with Kubeadm, but the installation wasn't exactly smooth. This post shares the problems I encountered and how I solved them.
First Installation of Kubeadm
- Install Ubuntu.
- Install Docker.
- Modify Docker’s cgroup settings to ensure the K8s and container runtime cgroup drivers match, preventing system instability.
sudo vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
- Restart Docker.
sudo systemctl restart docker
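Before restarting Docker, it can be worth validating the JSON you just wrote, since a syntax error in daemon.json will stop the daemon from starting at all. A minimal sketch (writing to /tmp here purely for illustration; the real file is /etc/docker/daemon.json):

```shell
# Sketch: validate daemon.json syntax before restarting Docker.
# /tmp/daemon.json is an illustrative copy of /etc/docker/daemon.json.
cat <<'EOF' > /tmp/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```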
- Install Kubeadm.
- Allow iptables to inspect bridged traffic.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
- Update apt and install packages required by K8s.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
- Download the Google Cloud public signing key.
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
- Add the K8s apt repository.
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list - Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions.
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
- Initialize K8s.
sudo kubeadm init
- Join worker nodes.
sudo kubeadm join 192.168.83.130:6443 --token token.... --discovery-token-ca-cert-hash sha256:......................
- Copy the kubeconfig.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
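The bridged-traffic step above can be sanity-checked by parsing the sysctl file back out. The sketch below works on a /tmp copy so it stays self-contained; on a real node you would point the same awk at /etc/sysctl.d/k8s.conf (or read the live values under /proc/sys):

```shell
# Sketch: confirm both bridge-nf-call keys are set to 1 in the sysctl file.
# /tmp/k8s-sysctl.conf is an illustrative copy of /etc/sysctl.d/k8s.conf.
cat <<'EOF' > /tmp/k8s-sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Count the bridge-nf-call keys whose value is 1; we expect exactly two.
count=$(awk -F' *= *' '$1 ~ /bridge-nf-call/ && $2 == 1 {n++} END {print n+0}' /tmp/k8s-sysctl.conf)
echo "bridge-nf-call keys set: $count"
```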
K8s Nodes Not Ready
The install finally succeeded! So I joined the worker nodes to the cluster. But just as I was about to start using K8s, I noticed that every node was in NotReady status.
alan@k8s-01:~$ kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
k8s-03   NotReady   <none>                 7m17s   v1.23.4
k8s-01   NotReady   control-plane,master   7m45s   v1.23.4
k8s-02   NotReady   <none>                 7m24s   v1.23.4
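A quick way to spot the unhealthy nodes is to filter on the STATUS column. The sketch below runs the filter over a captured copy of the output above (saved to a hypothetical /tmp/nodes.txt so it is self-contained); on a live cluster you would pipe `kubectl get nodes` straight into the same awk:

```shell
# Sketch: list only the NotReady nodes.
# /tmp/nodes.txt reproduces the captured `kubectl get nodes` output above.
cat <<'EOF' > /tmp/nodes.txt
NAME     STATUS     ROLES                  AGE     VERSION
k8s-03   NotReady   <none>                 7m17s   v1.23.4
k8s-01   NotReady   control-plane,master   7m45s   v1.23.4
k8s-02   NotReady   <none>                 7m24s   v1.23.4
EOF

# Skip the header row, print the node name wherever STATUS is NotReady.
awk 'NR > 1 && $2 == "NotReady" {print $1}' /tmp/nodes.txt
```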
The reason was that I hadn’t configured a CNI (Container Network Interface) plugin (see "kubeadm: master node never ready"). So I applied Flannel.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
But even after applying it, the nodes were still NotReady. What went wrong this time? After some digging, I discovered that Flannel requires the cluster to be initialized with a pod network CIDR (Classless Inter-Domain Routing) range. Fine, time to start over.
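For context on what that setting means: the pod network CIDR carves out the address range pods will use, and Flannel's stock manifest assumes 10.244.0.0/16 unless you edit its ConfigMap. A quick look at what that /16 actually covers, using Python's stdlib ipaddress module:

```shell
# Sketch: inspect the 10.244.0.0/16 pod network range that Flannel's
# default manifest expects.
python3 - <<'EOF'
import ipaddress

net = ipaddress.ip_network("10.244.0.0/16")
print("first address: ", net[0])
print("last address:  ", net[-1])
print("total addresses:", net.num_addresses)
EOF
```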
Second Installation
- Install Ubuntu.
- Install Docker.
- Modify Docker’s cgroup settings to ensure K8s and the container runtime cgroup drivers match.
sudo vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
- Restart Docker.
sudo systemctl restart docker
- Install Kubeadm.
- Allow iptables to inspect bridged traffic.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
- Update apt and install packages required by K8s.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
- Download the Google Cloud public signing key.
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
- Add the K8s apt repository.
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list - Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions.
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
- Initialize K8s, this time with the pod network CIDR Flannel expects.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
- Copy the kubeconfig.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Configure the CNI.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
- Join worker nodes.
sudo kubeadm join 192.168.83.130:6443 --token token.... --discovery-token-ca-cert-hash sha256:......................
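As an aside, the sha256:... value in the join command is, as I understand it from the kubeadm docs, the SHA-256 digest of the control plane CA's DER-encoded public key, which is why kubeadm init prints it for you. The sketch below only illustrates the format, with a placeholder byte string standing in for a real key:

```shell
# Sketch of the --discovery-token-ca-cert-hash format. The byte string
# below is a placeholder, NOT a real CA public key.
python3 - <<'EOF'
import hashlib

fake_der = b"placeholder-public-key-bytes"  # hypothetical stand-in
print("sha256:" + hashlib.sha256(fake_der).hexdigest())
EOF
```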
Done and done!
zhan@k8s-01:~$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
k8s-03   Ready    <none>                 25h   v1.23.4
k8s-01   Ready    control-plane,master   25h   v1.23.4
k8s-02   Ready    <none>                 25h   v1.23.4
Conclusion
I finally got K8s up and running on my own! From here on, I really need to dig deeper into K8s; otherwise I'll keep running into weird issues and spending forever debugging them.
Feel free to leave a comment on my blog. Your feedback motivates me to keep writing. Thank you for reading, and let’s grow together to become better versions of ourselves.