BYOk8s — Build your own k8s cluster

Feb 1, 2021


Today, I will show you how you can run your own multi-region k8s cluster. First of all, you need to have 3 working VMs ready so that we can work on them. For this demo, I have created 3 VMs on the Vultr cloud provider.

Now, once you have the VMs working, there are a couple of things that need to be done.

System configuration and Installing kubelet, kubeadm and kubectl (On all nodes)

First, add the public IPs and hostnames of all the VMs to the /etc/hosts file on each node.

On master-node:
IP_OF_WORKER worker-node1
IP_OF_WORKER_2 worker-node2

On worker-node1:
IP_OF_MASTER master-node
IP_OF_WORKER_2 worker-node2

On worker-node2:
IP_OF_MASTER master-node
IP_OF_WORKER worker-node1

Once you have done this, you can try to ping all three nodes using their hostnames.
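A quick loop like the following (using the hostnames from the /etc/hosts entries above) can be run from any node to check connectivity:

```shell
# Ping each node once by hostname; any failure points to a bad /etc/hosts entry
for host in master-node worker-node1 worker-node2; do
  ping -c 1 -W 2 "$host" > /dev/null && echo "$host reachable" || echo "$host UNREACHABLE"
done
```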

It's working. Now let's add the yum repositories for Kubernetes and Docker. Run the following commands on all three nodes.

Make sure you run all these commands on all nodes.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Once you have done this, there are a couple of installations that need to be done. I will paste the complete commands; please make sure you understand what you are doing.

Install kubelet, kubeadm and kubectl

sudo yum install -y kubelet kubeadm kubectl

Start and enable the kubelet service

sudo systemctl enable kubelet
sudo systemctl start kubelet
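You can check the service state right away; note that kubelet will keep restarting until kubeadm init or kubeadm join runs later, so a crash-loop here is expected:

```shell
# "activating (auto-restart)" is normal at this stage; kubelet has no config yet
sudo systemctl status kubelet --no-pager
```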

Now, let's set the hostnames to match the /etc/hosts entries

On Master

sudo hostnamectl set-hostname master-node

On worker node 1

sudo hostnamectl set-hostname worker-node1

On worker node 2

sudo hostnamectl set-hostname worker-node2

Now, take time and make sure you can ping all of the servers with one another using their hostnames.

Firewall configuration (On All Nodes)

Now, we need to do some firewall configuration. Open the following ports in the OS firewall on all the nodes.

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload
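After the reload, you can confirm the permanent rules took effect:

```shell
# List the currently open ports; the six rules added above should all appear
sudo firewall-cmd --list-ports
# Expected output includes: 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp
```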

Now, we need to enable bridge networking so that packets coming in on the bridge interface are processed by IPtables.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Reload the sysctl settings

sudo sysctl --system
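To verify the bridge settings were applied (this assumes the br_netfilter kernel module is loaded, which kubeadm also checks for):

```shell
# Both keys should report a value of 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```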

Now, disable SELinux and swap, as they cause problems for the kubelet process.

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo sed -i '/swap/d' /etc/fstab
sudo swapoff -a
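A quick check that both changes took effect:

```shell
# Should print "Permissive" (or "Disabled")
getenforce
# The Swap line should show 0B total after swapoff
free -h | grep -i swap
```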

Install Docker (On All Nodes)

We will be using Docker as the container runtime for Kubernetes.

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum update -y && sudo yum install -y docker-ce-19.03.11 docker-ce-cli-19.03.11

Create a directory for the Docker configuration to be stored in.

sudo mkdir -p /etc/docker

Add the following configuration in the daemon.json file.

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
Create the systemd drop-in directory for the Docker service

sudo mkdir -p /etc/systemd/system/docker.service.d

Reload the systemd daemon and restart Docker

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
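To confirm Docker picked up the daemon.json settings, check the active cgroup driver:

```shell
# Should print "systemd", matching the native.cgroupdriver setting above
sudo docker info --format '{{.CgroupDriver}}'
```

Kubelet and Docker must agree on the cgroup driver, which is why the setting above matters.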

Initializing kubernetes cluster (On Master Node)

Now run the following command to initialize the Kubernetes master node. The pod network CIDR 10.244.0.0/16 is the default expected by flannel, which we will install shortly.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU,Mem

kubeadm runs pre-flight checks for CPU and memory requirements and fails the setup if they are not met. For learning purposes, we skip these checks.

Now create the .kube directory where your kubeconfig will reside

mkdir -p $HOME/.kube

Copy the kubeconfig file to .kube directory.

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
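At this point kubectl should be able to talk to the API server. A quick sanity check (run as the same user that owns the kubeconfig):

```shell
# Confirm kubectl can reach the cluster with the copied kubeconfig
kubectl cluster-info
# The coredns pods will stay Pending until a CNI plugin is installed
kubectl get pods -n kube-system
```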

Kubernetes needs a CNI (Container Network Interface) plugin so that pods can communicate with each other. We are going to use flannel as the CNI for our cluster. Note that sudo is not needed here, since kubectl now uses your own kubeconfig.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
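To confirm flannel came up (the label below matches the flannel manifest of that era; adjust it if your manifest labels the pods differently):

```shell
# Flannel runs as a DaemonSet; one pod per node should reach Running
kubectl -n kube-system get pods -l app=flannel -o wide
```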

Now we are done on the master. Wait for some time and run:

kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
master-node   Ready    control-plane,master   18m   v1.20.2

Adding worker nodes to the cluster (On worker nodes)

Paste the kubeadm join command which was printed when you ran the kubeadm init command.

NOTE: If you are using cloud VM, make sure port 6443 is enabled on your VM so that the master and worker nodes can communicate with each other.

sudo kubeadm join IP_OF_MASTER:6443 --token 6phmnm.xu4ot5287s5zsa5t --discovery-token-ca-cert-hash sha256:1d01487792f54ec95d56d3a5cf9525858411da6b4b9c5d8db80f3275b182a693
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Once you are done with this, wait for some time and run kubectl get nodes. You should see your cluster ready.

kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
master-node    Ready    control-plane,master   34m   v1.20.2
worker-node1   Ready    <none>                 30m   v1.20.2
worker-node2   Ready    <none>                 30m   v1.20.2
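As an optional smoke test (the deployment name nginx here is arbitrary), you can schedule a pod and confirm it lands on one of the workers:

```shell
# Create a throwaway nginx deployment and see which node the pod is scheduled on
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
# Clean up afterwards
kubectl delete deployment nginx
```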

That’s it, you have your vanilla k8s cluster ready.


In this tutorial, we got good hands-on experience setting up a multi-region Kubernetes cluster. I hope you learned something.




Senior Site Reliability Engineer & Backend Engineer | Docker Captain 🐳 | Auth0 Ambassador @Okta |