[Kubernetes] Creating Testing Environment with Cloud VMs


Cloud VM Spec

  • VM OS: Ubuntu 18.04
  • Size: 2 Virtual CPU, 4 GiB Memory
  • VM 1 — Master Node
  • VM 2 — Worker Node 1
  • VM 3 — Worker Node 2

Install Commands #1 (Run these in all 3 VMs)

1-1) Login

Log into the cloud VMs and escalate to root.
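For example, assuming an Ubuntu cloud image reachable over SSH as the default ubuntu user (substitute your own user and addresses):

$ ssh ubuntu@<vm-ip-address>   # repeat for the master and both workers
$ sudo -i                      # escalate to a root shell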

1-2) Add GPG key for Docker Repository

# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
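Optionally, verify the key against the fingerprint published in Docker's install documentation (its ID ends in 0EBFCD88):

# sudo apt-key fingerprint 0EBFCD88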

1-3) Add Docker Repository

# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Hit:1 http://us-west-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://us-west-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://us-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Get:4 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:5 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
Get:6 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages [15.4 kB]
Fetched 169 kB in 3s (51.0 kB/s)
Reading package lists... Done

1-4) Reload the apt Source List

# sudo apt-get update

1-5) Install Docker

# sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
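If that exact version string is no longer offered by the repository, you can list the currently available versions and substitute a matching one:

# apt-cache madison docker-ce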

1-6) Prevent auto-updates for the Docker Package

# sudo apt-mark hold docker-ce
docker-ce set on hold.

1-7) Docker Version

# sudo docker version
(The output shows both the Docker client and server at version 18.06.1-ce.)
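As an additional sanity check, you can run a test container:

# sudo docker run hello-world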

1-8) Add GPG Key for Kubernetes Repository

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK

1-9) Add Kubernetes Repository

# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main

1-10) Reload the apt Source List

# sudo apt-get update

1-11) Install Kubeadm, Kubectl & Kubelet

# sudo apt-get install -y kubelet=1.15.7-00 kubeadm=1.15.7-00 kubectl=1.15.7-00

1-12) Prevent auto-updates for Kubernetes Packages

# sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

1-13) Kubeadm Version

# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:40:15Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

Install Commands #2 (Run these on the Master Node Only)

2-1) Initialize the Cluster on the Kubernetes Master Server

# sudo kubeadm init --pod-network-cidr=10.15.0.0/16
(The init output ends with a kubeadm join command, including a token and CA cert hash; save it for step 3-1.)

2-2) Kubeconfig Setup

mkdir -p $HOME/.kube 
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
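To confirm kubectl can reach the API server with this config:

# kubectl cluster-info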

2-3) Enable the net.bridge Module for Cluster Communications

# echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
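Appending to /etc/sysctl.conf alone only takes effect on the next boot; to apply the setting to the running kernel immediately:

# sudo sysctl -p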

2-4) Install Flannel

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
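One caveat: this manifest's kube-flannel-cfg ConfigMap sets Flannel's Network to 10.244.0.0/16 by default, while the cluster above was initialized with --pod-network-cidr=10.15.0.0/16. If the Flannel pods crash-loop, you may need to edit the Network value in that ConfigMap to match the cluster CIDR. To check that the Flannel DaemonSet pods come up on each node:

# kubectl get pods -n kube-system -l app=flannel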

2-5) Verify the Node Status

# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   13m   v1.15.7

Install Commands #3 (Run these on the Worker Nodes Only)

3-1) Join the Worker Nodes to the Master Node

# kubeadm join 172.31.110.90:6443 --token sfe4ud.k34qdmi1373g178b --discovery-token-ca-cert-hash sha256:b7bfcc3fc1a555987a850eec303d1c7ab89176a7ac486f0245a6209b8c32da1a
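The token and CA cert hash above are unique to each cluster, and bootstrap tokens expire after 24 hours by default. If you need the join command again later, regenerate it on the master:

# kubeadm token create --print-join-command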
Back on the master node, verify that the workers have joined:

# kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
master    Ready      master   64m   v1.15.7
worker1   Ready      <none>   24s   v1.15.7
worker2   NotReady   <none>   8s    v1.15.7

worker2 shows NotReady only because it joined moments ago; it becomes Ready once its kubelet and Flannel pod finish starting.

Deleting Worker Nodes from the Cluster

In some cases you may need to remove a worker node from the cluster. Use the following commands to do so:

# In Master Node
kubectl get nodes
kubectl drain <node-name> --ignore-daemonsets
kubectl delete node <node-name>
# In Worker Node
kubeadm reset
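kubeadm reset leaves CNI configuration and iptables rules in place (its own output says as much); you can clean these up manually on the worker:

# In Worker Node, after kubeadm reset
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X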
