[Kubernetes] Creating Testing Environment with Cloud VMs

In one of my previous blogs, I showed how to get started with the Play with Kubernetes platform, deploy a simple Kubernetes cluster, and run a simple web application on it.
The Play with Kubernetes platform is great for learning how to set up the environment; however, it is not ideal as a persistent testing lab because its sessions expire.
Cloud VM Spec
- VM OS: Ubuntu 18.04
- Size: 2 Virtual CPU, 4 GiB Memory
Note: To run kubeadm, a minimum of two (2) CPUs is required
We will create three (3) VMs with the same spec:
- VM 1 — Master Node
- VM 2 — Worker Node 1
- VM 3 — Worker Node 2
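Before installing anything, it can be handy to confirm each VM actually meets the spec. The following is a minimal sketch of such a preflight check; `meets_spec` is my own helper name, not a standard tool.

```shell
#!/bin/sh
# Preflight sketch: kubeadm requires at least 2 CPUs,
# and this lab assumes at least 4 GiB of memory.
# meets_spec CPUS MEM_GIB prints "ok" or "insufficient".
meets_spec() {
    if [ "$1" -ge 2 ] && [ "$2" -ge 4 ]; then
        echo "ok"
    else
        echo "insufficient"
    fi
}

# On each VM: nproc reports the CPU count; /proc/meminfo reports memory in KiB
# (rounded to the nearest GiB, since a "4 GiB" VM reports slightly less).
cpus=$(nproc)
mem_gib=$(awk '/MemTotal/ {printf "%.0f", $2 / 1048576}' /proc/meminfo)
echo "CPUs: $cpus, Memory: ${mem_gib} GiB -> $(meets_spec "$cpus" "$mem_gib")"
```

Run it on each of the three VMs before proceeding.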
I will skip how to create the cloud VMs in this post; it does not matter which cloud platform you choose as long as the VMs meet the spec above.
Install Commands #1 (Run these in all 3 VMs)
1-1) Login
Log into the cloud VMs and escalate to root.
1-2) Add GPG key for Docker Repository
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
1-3) Add Docker Repository
# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Hit:1 http://us-west-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://us-west-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://us-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Get:4 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:5 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
Get:6 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [15.4 kB]
Fetched 169 kB in 3s (51.0 kB/s)
Reading package lists... Done
1-4) Reload the apt Source List
# sudo apt-get update
1-5) Install Docker
# sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
1-6) Prevent auto-updates for the Docker Package
# sudo apt-mark hold docker-ce
docker-ce set on hold.
1-7) Docker Version
# sudo docker version
The Docker version has to be below 18.09; otherwise, kubeadm will complain.
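If you want to script that check instead of reading the output by eye, a small comparison helper using `sort -V` works; `version_lt` is a hypothetical name of my own, and the `docker version --format` call in the comment assumes Docker is already installed.

```shell
#!/bin/sh
# version_lt A B prints "yes" if version A sorts strictly before version B
# (sort -V does the version-aware ordering), else "no".
version_lt() {
    if [ "$1" = "$2" ]; then
        echo "no"
        return
    fi
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)
    if [ "$lower" = "$1" ]; then echo "yes"; else echo "no"; fi
}

# On a live VM you would feed in the installed version, e.g.:
#   version_lt "$(sudo docker version --format '{{.Client.Version}}')" "18.09"
echo "18.06.1 below 18.09? $(version_lt 18.06.1 18.09)"
echo "18.09.5 below 18.09? $(version_lt 18.09.5 18.09)"
```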

1-8) Add GPG Key for Kubernetes Repository
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
1-9) Add Kubernetes Repository
# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
1-10) Reload the apt Source List
# sudo apt-get update
1-11) Install Kubeadm, Kubectl & Kubelet
# sudo apt-get install -y kubelet=1.15.7-00 kubeadm=1.15.7-00 kubectl=1.15.7-00
1-12) Prevent auto-updates for Kubernetes Packages
# sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
1-13) Kubeadm Version
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:40:15Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Install Commands #2 (Run these in Master Node Only)
2-1) Initialize the Cluster on the Kubernetes Master Server
# sudo kubeadm init --pod-network-cidr=10.15.0.0/16
Note: the stock Flannel manifest used in step 2-4 defaults to a pod network of 10.244.0.0/16. If you keep 10.15.0.0/16 here, edit the Network value in the manifest's net-conf.json to match before applying it.

At the end of the output, save the kubeadm join command (including the token and discovery-token-ca-cert-hash); you will need it in step 3-1 to join the worker nodes.

2-2) Kubeconfig Setup
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
2-3) Enable the net.bridge Module for Cluster Communications
# echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
# sudo sysctl -p
(The sysctl -p applies the setting immediately instead of waiting for a reboot.)
2-4) Install Flannel
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
2-5) Verify the Node Status
# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   13m   v1.15.7
Install Commands #3 (Run these in Worker Nodes Only)
3-1) Join the Worker Nodes to the Master Node
Run the kubeadm join command you saved from the kubeadm init output. The IP, token, and hash below are examples from my cluster; yours will differ.
# kubeadm join 172.31.110.90:6443 --token sfe4ud.k34qdmi1373g178b --discovery-token-ca-cert-hash sha256:b7bfcc3fc1a555987a850eec303d1c7ab89176a7ac486f0245a6209b8c32da1a

Now check your Master node to see whether the worker nodes have successfully joined the cluster.
# kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
master    Ready      master   64m   v1.15.7
worker1   Ready      <none>   24s   v1.15.7
worker2   NotReady   <none>   8s    v1.15.7
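If you want to verify readiness in a script rather than by eyeballing the table, the output of `kubectl get nodes --no-headers` can be parsed with awk. This is a self-contained sketch: `count_not_ready` is my own helper name, and the sample rows are hard-coded from the output above so it runs without a cluster.

```shell
#!/bin/sh
# count_not_ready reads "kubectl get nodes --no-headers"-style lines on stdin
# and prints how many nodes are not in the Ready state (0 means all Ready).
count_not_ready() {
    awk '$2 != "Ready" { n++ } END { print n + 0 }'
}

# Sample rows from the cluster above; on a live master you would run:
#   kubectl get nodes --no-headers | count_not_ready
printf '%s\n' \
    "master Ready master 64m v1.15.7" \
    "worker1 Ready <none> 24s v1.15.7" \
    "worker2 NotReady <none> 8s v1.15.7" | count_not_ready   # prints 1
```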
Deleting Worker Nodes from the Cluster
In some cases you may need to delete a worker node from the cluster. Use the following commands to do so:
# In Master Node
kubectl get nodes
kubectl drain <node-name> --ignore-daemonsets
kubectl delete node <node-name>

# In Worker Node
kubeadm reset
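The master-side steps above can be wrapped in a small dry-run helper that prints the commands for a given node before anything destructive runs. `print_removal_plan` is a hypothetical name of my own; the `--ignore-daemonsets` flag is included because Flannel runs as a DaemonSet on every node, and a plain drain refuses to evict DaemonSet pods.

```shell
#!/bin/sh
# print_removal_plan NODE prints (but does not execute) the master-side
# commands that evict workloads and remove NODE from the cluster.
print_removal_plan() {
    cat <<EOF
kubectl drain $1 --ignore-daemonsets
kubectl delete node $1
EOF
}

print_removal_plan worker2
```

Review the printed plan, then paste the commands on the master; finish with kubeadm reset on the worker itself.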