Kubernetes KubeADM Setup on Oracle VirtualBox UbuntuVM

Prasad Pande
8 min read · Jul 12, 2021

Today we will learn to create a KubeADM cluster on two of our Oracle VirtualBox machines. The machine configuration should be as below:

Pre-Requisite: Some familiarity with using Oracle VirtualBox

We have to set-up 2 machines in Oracle VirtualBox first –

1) Kubernetes Master (kmaster) (2 CPUs, 3 GB RAM and 30 GB hard disk)

2) Kubernetes Worker (kworker) (4 CPUs, 8 GB RAM and 50 GB hard disk)

If you do not have enough memory on your system, you can assign 2 GB or 4 GB as well, but with lower memory you will not be able to run many Kubernetes pods later on, since running pods consume memory.

You can install Ubuntu 18.04 (https://releases.ubuntu.com/18.04.5/ubuntu-18.04.5-live-server-amd64.iso) on kmaster and then, in Oracle VirtualBox, make a Full Clone of the kmaster machine to generate an exact replica of it. We will later rename it kworker.

On the next page select Full Clone option and click Clone.

Once you have both machines ready and available in Oracle VirtualBox, start them.

Because our Worker machine is a clone of the Master machine, ensure a few things before moving ahead:

Please note: while running any of the commands below, if you get a Permission denied error or an error like [ERROR IsPrivilegedUser]: user is not running as root, just add sudo before your command to run it as the root user. Please do not use sudo before every command though; use it only when you get an error like those mentioned above.

1) The IP addresses of the Master and Worker machines must be different; in my case kmaster had 192.168.1.98 and kworker had 192.168.1.85.

2) Next we need to change the name of the newly cloned kworker machine in the /etc/hosts file and the /etc/hostname file.

Run the below commands to change it:

sudo vim /etc/hosts
sudo vim /etc/hostname

(if you do not have vim, install it using: sudo apt-get install vim -y)

/etc/hosts file

/etc/hostname file

Restart the kworker machine for the changes to take effect.

You can observe the name has changed now to kworker from kmaster.
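As a quick sanity check of my own (run on both VMs), the new hostname, its /etc/hosts entry, and the machine's IP address should all line up, and the two machines should report different names and different addresses:

```shell
# Print this machine's name, its /etc/hosts entry, and its IP addresses.
# kmaster and kworker must report different names and different IPs.
hostname
grep "$(hostname)" /etc/hosts
hostname -I
```

If both clones somehow ended up with the same IP, one common fix is to regenerate the MAC address of the cloned VM's network adapter in VirtualBox so DHCP hands out a distinct address.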

KubeAdm Setup on our 2 Kubernetes Nodes:

We will follow the below webpage:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

IMPORTANT: Run the below commands on both the kmaster and the kworker machines, until it is specified to run them on a single machine.

Do not run the next set of commands as the root user; run them as a regular user. Whenever needed, we will use sudo.

Run the commands:

sudo depmod
sudo modprobe br_netfilter
lsmod | grep br_netfilter

Run the below:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
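To confirm the sysctl settings above actually took effect, you can read the values back from /proc (a hedged check of my own; both files should contain 1 once the br_netfilter module is loaded):

```shell
# Both bridge-nf-call files should exist and contain 1; if they are
# missing, the br_netfilter module has not been loaded yet.
for f in /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
  if [ -f "$f" ]; then
    echo "$f = $(cat "$f")"
  else
    echo "$f missing: is br_netfilter loaded?"
  fi
done
```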

Next we have to install Docker on both machines and perform its related configuration:

We will follow the below webpage:

https://docs.docker.com/engine/install/ubuntu/

Older versions of Docker were called docker, docker.io, or docker-engine. First of all, remove any older versions of Docker and related software from both of your machines using the below command:

sudo apt-get remove docker docker-engine docker.io containerd runc

Update the apt package index and install packages to allow apt to use a repository over HTTPS:

sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
Add Docker’s official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Use the following command to set up the stable repository:

echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Update the apt package index, and install the latest version of Docker Engine and containerd:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io 
(at the prompt enter Y)
sudo docker version
At this step both the kmaster and the kworker machines should have the latest version of Docker installed.

Next, configure the Docker daemon, in particular to use systemd for the management of the container’s cgroups:

sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
Restart Docker and enable it on boot:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
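One easy mistake here is a typo in daemon.json, which prevents the Docker daemon from starting at all. A small, hedged sanity check of my own before restarting Docker is to confirm the file parses as valid JSON (python3 ships with Ubuntu):

```shell
# Validate /etc/docker/daemon.json as JSON before restarting Docker;
# a malformed file stops dockerd from starting.
if [ -f /etc/docker/daemon.json ]; then
  python3 -m json.tool /etc/docker/daemon.json > /dev/null && echo "daemon.json is valid JSON"
fi
```

Once Docker is back up, sudo docker info | grep -i cgroup should report systemd as the cgroup driver.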

Installing kubeadm, kubelet and kubectl

Update the apt package index and install packages needed to use the Kubernetes apt repository:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
Download the Google Cloud public signing key:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Add the Kubernetes apt repository:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version

Disable swap on both the kmaster and kworker machines

sudo swapoff -a

(if your machines restart, remember to run sudo swapoff -a before you do any other activity)
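The command above only disables swap until the next reboot. An optional, hedged sketch to make it permanent is to comment out the swap entry in /etc/fstab (sed keeps a backup as /etc/fstab.bak; review the file before rebooting):

```shell
# Comment out any uncommented swap line in /etc/fstab so swap stays
# off across reboots; the original file is saved as /etc/fstab.bak.
sudo sed -i.bak '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab
```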

Creating a cluster with kubeadm

IMPORTANT: Run these commands on the kmaster machine only, until specified otherwise.

We will follow the below webpage:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

The next command will take some time to complete on kmaster; let it finish.

kubeadm init --pod-network-cidr=<pod_network_cidr> --apiserver-advertise-address=<Kubernetes_Master_IP_Address>

Use 10.244.0.0/16 as the <pod_network_cidr> value; this is the pod CIDR that the Flannel network add-on we install later expects.

The output of the above kubeadm init command will be used for further setup (so do not clear the output):

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.98:6443 --token 239je4.9edi2ig0vqh7ruln \
--discovery-token-ca-cert-hash sha256:d69de31cfb54be1ad8640c948d3a686901ae1df2921e95d9127347d26e782c89

We will need the above kubeadm join command to join kworker node to the cluster later on so save this command from above output.
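If the join command scrolls away, or you come back later after the token has expired (kubeadm tokens are valid for 24 hours by default), a fresh one can be printed on kmaster at any time:

```shell
# Generates a new token and prints a complete, ready-to-run join command.
kubeadm token create --print-join-command
```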

Next Run these commands as a regular user on kmaster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cat .kube/config (on kmaster machine)

Installing Pod Network

We will follow the below webpage to install the Pod network:

https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model

kubectl get pods --all-namespaces (on kmaster machine)

We will be using the Flannel Pod networking solution.

Run the below on kmaster machine

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Now wait a few minutes to let the flannel pods initialize and start.

Next run

kubectl get nodes (on kmaster node)

We can observe that the kmaster machine is Ready now, which means the flannel pod networking solution is working correctly.

Next, let's join the kworker node to the Kubernetes cluster using the join command that came in the output of the kubeadm init command run earlier.

IMPORTANT Run the next command on kworker machine only:

The next command, to be run on the kworker machine only, will take some time to complete, so give it time.

# Please do not use the below command as it is; take the kubeadm join command from the output of your own kubeadm init command. It should look similar to the below, but with your <Kubernetes_Master_server_IP_address>:6443 instead of 192.168.1.98:6443, and with different token and discovery-token-ca-cert-hash values.
# Also note that the below command needs to be run on the Kubernetes Worker machines only, to connect your Worker machine to the Kubernetes cluster; it does not need to run on the Kubernetes Master.
kubeadm join 192.168.1.98:6443 --token 239je4.9edi2ig0vqh7ruln \
--discovery-token-ca-cert-hash sha256:d69de31cfb54be1ad8640c948d3a686901ae1df2921e95d9127347d26e782c89

The above command output tells us that the kworker machine has successfully joined the cluster. (Wait 40–60 seconds.)

kubectl get nodes (on kmaster machine)

The above command output tells us that both the kmaster and kworker machines are part of the Kubernetes cluster now, and our KubeADM setup is successful.

Let's try to run an nginx pod, just to see whether Kubernetes scheduling works correctly:

kubectl run nginx --image=nginx (on kmaster machine)
kubectl get pods -o wide (on kmaster machine)

From the above command output you can observe that the nginx pod is created and is now running on the kworker node.
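As an optional follow-up sketch (the service name nginx-svc is my own choice, not part of the original setup), you can expose the pod as a NodePort service to confirm pod networking end to end:

```shell
# Expose the running nginx pod on a NodePort and show the service;
# the PORT(S) column lists the assigned node port (e.g. 80:3xxxx/TCP).
kubectl expose pod nginx --port=80 --type=NodePort --name=nginx-svc
kubectl get svc nginx-svc
```

Fetching http://<kworker_IP>:<NodePort> from kmaster should then return the nginx welcome page.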
