Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.
We built the Kubernetes cluster setup with the following prerequisites:
One master node (Ubuntu Desktop OS, 64-bit) and 2 worker nodes (Ubuntu Server OS, 64-bit)
Worker nodes are VM instances running on the same machine (with externally reachable addresses; they have to use the same network interface that the master node uses, e.g. eth0, eth1, or Wi-Fi)
Each worker node should be provisioned with a minimum of 2 GB RAM, 2 CPU cores, and 30 GB of disk space
Note: verify connectivity among all nodes via SSH or ping before proceeding (for example, see the checks below).
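Basic connectivity can be checked from the master node like this (the IP address and username below are placeholders, not values from this setup):
- ping -c 3 <worker-node-ip>
- ssh <username>@<worker-node-ip>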
The following 4 steps need to be executed on each node (both master and worker nodes):
- sudo apt-get update && sudo apt-get install -qy docker.io
- sudo apt-get update && sudo apt-get install -y apt-transport-https && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
- echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update
- sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
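As an optional sanity check, the installed versions can be verified on each node:
- kubeadm version
- kubectl version --client
- docker --version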
Configure the cgroup driver used by kubelet on the master node
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker; verify that your Docker cgroup driver matches the kubelet configuration.
- docker info | grep -i cgroup
- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
- sudo sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
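If the cgroup driver value was changed, the kubelet service should be reloaded and restarted so the new setting takes effect (standard systemd steps, added here as a suggestion):
- sudo systemctl daemon-reload
- sudo systemctl restart kubelet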
- sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<master-node-ip> (secure cluster init; we have to store the init results, since the output contains the join command for the worker nodes)
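One simple way to store the init results is to tee the output to a file so the join command printed at the end is not lost (the log file name here is just an example):
- sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<master-node-ip> | tee ~/kubeadm-init.log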
- sudo useradd -G sudo -m -s /bin/bash <username>
- sudo passwd <username>
- sudo su
- cd $HOME
- sudo cp /etc/kubernetes/admin.conf $HOME/
- sudo chown $(id -u):$(id -g) $HOME/admin.conf
- export KUBECONFIG=$HOME/admin.conf
- echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
- source ~/.bashrc
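At this point kubectl should be able to reach the cluster; the master node typically shows as NotReady until the pod network is applied in the next step:
- kubectl get nodes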
Apply your pod network (flannel)
- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
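To confirm the network add-on came up, check the system pods; the flannel and DNS pods should eventually reach the Running state:
- kubectl get pods --all-namespaces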
Before executing the join command on each worker node, swap has to be disabled (kubelet will not start by default while swap is enabled):
- sudo swapoff -a
- sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
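Optionally, confirm that swap is really off (the Swap line should report 0):
- free -h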
- sudo kubeadm join --token <token> <master-node-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>
- kubectl get nodes (run on the master node; should display all nodes that joined the cluster via kubeadm join with the token)
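If the token from kubeadm init was lost or has expired, a fresh join command can be generated on the master node (optional helper):
- sudo kubeadm token create --print-join-command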
If anything goes wrong, such as worker nodes not appearing in the cluster or problems creating pods and containers on a worker node, the process can be reset.
Execute the statements below on every node, including the master node; after running them, start again from step 1.
- sudo kubeadm reset
- sudo service docker restart
- sudo systemctl restart kubelet
Useful kubectl commands for verification and troubleshooting:
- kubectl get nodes
- kubectl cluster-info
- kubectl config view
- kubectl get pods -o wide
- kubectl get deployments
- kubectl describe pods
- kubectl logs <pod-name>
- kubectl run <name> --image=<image>:<tag> (pod creation)
- kubectl get services (or: kubectl get svc)
- kubectl describe service <service-name>
- kubectl scale deployment <deployment-name> --replicas=3
- kubectl delete service <service-name>
- kubectl expose deployment <name> --type=LoadBalancer (or NodePort) --port=<service-port>
- kubectl get ingress (or: kubectl get ing)
- kubectl describe ingress <ingress-name>
Example deployments:
- kubectl run webserver --image=nginx:alpine --replicas=2
- kubectl expose deployment webserver --type=LoadBalancer --port=80
- kubectl run camunda --image=camunda/camunda-bpm-platform:latest --replicas=2
- kubectl expose deployment/camunda --type=LoadBalancer --port=8080
- kubectl run wso2apim --image=isim/wso2apim
- kubectl expose deployment/wso2apim --type=LoadBalancer --port=9443
- kubectl run wso2esb --image=isim/wso2esb
- kubectl expose deployment/wso2esb --type=LoadBalancer --port=9443
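On a bare-metal cluster without an external load balancer, services exposed with --type=LoadBalancer usually remain reachable through the NodePort they are assigned; for example, for the webserver deployment above (IP and port below are placeholders):
- kubectl get svc webserver
- curl http://<worker-node-ip>:<node-port>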