**KUBERNETES**
-- Container orchestration system.
#Kubernetes (K8S) deals with ->
- Automated deployment of containerized applications across different servers.
- Distribution of load across multiple servers.
- Auto-scaling of the deployed applications.
- Monitoring & health checks of the containers.
- Replacement of failed containers.
#Runtimes supported by Kubernetes ->
- Docker
- CRI-O
- containerd
**Terminologies & Architecture
--> *Pod* - The smallest deployable unit of a Kubernetes cluster (analogous to a Docker container).
--> A Pod may contain --> multiple containers, shared volumes & a shared IP address.
--> *Kubernetes Cluster* --> Contains different nodes (servers) [which in-turn house different pods].
|
|
V
Kubernetes cluster contains 2 types of nodes :
1. Master Node : Orchestrates the worker nodes by distributing the load among them.
2. Worker Node : Carries out the operation/load assigned to it by master node.
--> Master node contains --> API server, Scheduler, Kube Controller Manager, Cloud Controller Manager, etcd, DNS Services, kubelet, kube-proxy, container runtime.
--> Worker node contains --> kubelet, kube-proxy, container runtime.
--> Master node communicates with the kubelet of the worker nodes through its API server.
--> kube-proxy - Maintains the network rules on each node, so that traffic sent to a Service is routed to the right pods.
--> Scheduler - Decides which worker node each new pod should run on, distributing the load across the worker nodes.
--> Kube Controller Manager - Runs the controllers (node, replication, endpoints, etc.) that continuously reconcile the actual cluster state with the desired state.
--> Cloud Controller Manager - Integrates the cluster with the underlying cloud provider (node lifecycle, cloud load balancers, network routes); this is what lets the cloud provision external IPs/load balancers through which the web can reach our deployments.
--> etcd - A distributed key-value store that holds the entire cluster state & configuration.
--> DNS Services - Responsible for name resolution inside the entire Kubernetes cluster. It can be used to connect 2 different services by their names.
--> kubectl (kube control) - A CLI tool to control a Kubernetes cluster. It talks over HTTPS to the API Server of the master node of the cluster.
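--> How does kubectl know which cluster's API server to talk to? It reads a kubeconfig file (by default ~/.kube/config). A small sketch of commands to inspect/switch the target cluster (the "minikube" context shown is an assumption - it is the context minikube normally creates on start-up):
      $ kubectl config view                   # prints the kubeconfig: API server URLs, users, contexts
      $ kubectl config get-contexts           # lists the configured contexts
      $ kubectl config use-context minikube   # points kubectl at the minikube cluster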
**Kubernetes using Minikube (a tool that runs a local single-node Kubernetes cluster)
#CLI commands
--> minikube start --driver=hyperv --> Spins up a Kubernetes cluster through minikube inside a virtual environment, here Hyper-V (on Windows).
--> minikube ip --> returns the IP on which the minikube cluster is running.
--> ssh docker@<minikube-ip> --> opens an SSH session into the minikube VM (default user "docker").
--> kubectl cluster-info --> Returns info about the active kube clusters.
eg: PS C:\Windows\system32> kubectl cluster-info
Kubernetes control plane is running at https://172.18.203.146:8443
CoreDNS is running at https://172.18.203.146:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
--> kubectl get nodes --> fetches all the running nodes.
--> kubectl get pods --> fetches all the pods inside the Kubernetes nodes (in the current namespace).
--> kubectl get namespaces --> fetches the Kubernetes namespaces inside the minikube cluster.
eg:
PS C:\Windows\system32> kubectl get namespaces
NAME STATUS AGE
default Active 18m
kube-node-lease Active 18m
kube-public Active 18m
kube-system Active 18m
//Gets the running pods of the kube-system namespace, ie, the control-plane/system pods
PS C:\Windows\system32> kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-7db6d8ff4d-2krjb 1/1 Running 0 23m
etcd-minikube 1/1 Running 0 23m
kube-apiserver-minikube 1/1 Running 0 23m
kube-controller-manager-minikube 1/1 Running 0 23m
kube-proxy-khrmx 1/1 Running 0 23m
kube-scheduler-minikube 1/1 Running 0 23m
storage-provisioner 1/1 Running 1 (23m ago) 23m
**Kubernetes Pods
--> kubectl run <pod-name> --image=<image to run inside the pod> --> Spins up a pod inside a node of the Kubernetes cluster.
eg: kubectl run nginx --image=nginx --> spins up a pod named "nginx" running a container with the official nginx image from Docker Hub.
--> kubectl describe pod <pod name> --> Provides info about a pod.
NB: > By default, a pod is assigned to the Kubernetes "default" namespace.
> docker ps | grep nginx --> command inside the docker SSH session to filter only the nginx containers, instead of listing all containers (plain docker ps).
$ docker ps | grep nginx
2498e9a6a78a nginx "/docker-entrypoint.…" 9 minutes ago Up 9 minutes k8s_nginx_nginx_default_9e5299a2-5a30-4e47-ad54-b0e51c01a6ec_0
44cb05b529e0 registry.k8s.io/pause:3.9 "/pause" 9 minutes ago Up 9 minutes k8s_POD_nginx_default_9e5299a2-5a30-4e47-ad54-b0e51c01a6ec_0
--> Here the "/docker-entrypoint.…" container is the main functional container, while the "/pause" container holds the pod's shared namespaces (network/IP address) that every container of the nginx pod joins.
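NB: The same pod created above with "kubectl run nginx --image=nginx" can also be declared in a YAML manifest; a minimal sketch (the file name "nginx-pod.yaml" & the port number are assumptions for illustration):
      # nginx-pod.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: nginx
      spec:
        containers:
          - name: nginx
            image: nginx
            ports:
              - containerPort: 80
      # apply with: kubectl apply -f nginx-pod.yaml ; remove with: kubectl delete -f nginx-pod.yaml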
**Connecting to the created nginx container through docker SSH inside minikube
$ docker exec -it 2498e9a6a78a sh //connecting to the nginx container by its ID & starting a shell (sh)
# hostname
nginx
# hostname -i
10.244.0.3
# curl 10.244.0.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
--> kubectl get pods -o wide --> Lists the running pods along with their IP addresses. These pod IPs belong to the cluster network inside the VM where minikube runs, so they are not reachable from our local system.
--> kubectl delete pod <pod-name> --> deletes a pod.
--> alias k="kubectl" --> creating an alias for kubectl cmd inside Linux/Git Bash for a single session. Therefore now, kubectl get pods = k get pods (ie, k & kubectl are the same cmds).
**Kubernetes Deployments
--> k create deployment nginx-deployment --image=nginx --> creates a Deployment named "nginx-deployment" using the nginx image. The nginx pod it spins up is managed by the Deployment automatically for maintenance & scaling. (By default, a deployment is created in the default namespace.)
--> ReplicaSet --> The object a Deployment uses to maintain the desired number of replica pods.
--> kubectl/k scale deployment <deployment-name> --replicas=<replica-amt.> --> Scales a deployment up/down to the desired number of replicas = <replica-amt.>
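eg (a short sketch, assuming the nginx-deployment created above):
      $ k scale deployment nginx-deployment --replicas=3   # scale out to 3 pods
      $ k get replicasets                                  # the deployment's ReplicaSet now shows DESIRED=3
      $ k get pods                                         # three nginx-deployment-* pods are running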
**Kubernetes Services
--> k expose deployment <deployment-name> --port=<port on which to expose the service> --target-port=<port the pods listen on> --> Creates a Service with a cluster-wide ClusterIP which can be used to access all the deployed pods of a deployment inside the K8S cluster (instead of accessing them by their individual pod IPs).
eg: k expose deployment nginx-deployment --port=8080 --target-port=80 --> exposes "nginx-deployment" on port 8080 of a ClusterIP Service, so that all the nginx pods listening locally on port 80 inside the K8S cluster are reachable at a single Service IP on port 8080.
NB: Kubernetes automatically distributes the load among the deployed pods, & by using Services they can be accessed via a single ClusterIP instead of their individual unique pod IPs, which would not be feasible in a production env.
eg:
k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m
nginx-deployment ClusterIP 10.109.78.66 <none> 8080/TCP 12s
NB: also note, this ClusterIP belongs to the cluster network inside the VM where the K8S cluster runs; it is not reachable from our local machine.
$ k describe service nginx-deployment
Name: nginx-deployment
Namespace: default //default namespace where services are created
Labels: app=nginx-deployment
Annotations: <none>
Selector: app=nginx-deployment
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.109.78.66
IPs: 10.109.78.66
Port: <unset> 8080/TCP
TargetPort: 80/TCP
Endpoints: 10.244.0.3:80,10.244.0.4:80,10.244.0.5:80 //pods where load is distributed
Session Affinity: None
Events: <none>
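NB: Besides the ClusterIP, the Service is also reachable from inside the cluster by its DNS name (resolved by the cluster DNS service mentioned earlier). A sketch, run from within any pod of the cluster (names taken from the example above):
      $ curl http://nginx-deployment:8080                               # short form, works from the same (default) namespace
      $ curl http://nginx-deployment.default.svc.cluster.local:8080     # fully-qualified form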
**Deploying a web-server using K8S & Docker hub
--> k create deployment k8s-web-app --image=insanelytamojit/k8s-web-app --> creates a deployment by pulling the custom image "insanelytamojit/k8s-web-app", which was previously pushed to Docker Hub.
--> k expose deployment k8s-web-app --port=3000 --> creates a Service that exposes the deployment on port 3000 via a ClusterIP.
--> k scale deployment k8s-web-app --replicas=4 --> scales the deployment to 4 pods.
|
V
$ curl 10.106.185.183:3000; echo
Hello from k8s-web-app-575bb48d5-d2f59 //pod 1
$ curl 10.106.185.183:3000; echo
Hello from k8s-web-app-575bb48d5-6jrmd //pod 2
$ curl 10.106.185.183:3000; echo
Hello from k8s-web-app-575bb48d5-6jrmd //pod 2 again
$ curl 10.106.185.183:3000; echo
Hello from k8s-web-app-575bb48d5-d2f59 //pod 1 again
$ curl 10.106.185.183:3000; echo
Hello from k8s-web-app-575bb48d5-6jrmd
$ curl 10.106.185.183:3000; echo
Hello from k8s-web-app-575bb48d5-mschv //pod 3
--> Requests are load-balanced automatically across the replica pods by K8S (3 of the 4 replicas happen to appear in this sample).
# NodePort Service : Used to connect to deployments via the node's IP address, directly from our local machine.
--> k expose deployment k8s-web-app --type=NodePort --port=3000 -> creates a Service of type NodePort for the deployment.
eg:
$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8s-web-app NodePort 10.111.72.117 <none> 3000:30376/TCP 5s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h26m
$ minikube ip
172.27.189.212
Now, from a browser on our local machine, connecting to 172.27.189.212:30376 (minikube IP + NodePort) returns the server's response, without having to remotely connect into the K8S cluster where the server pod is deployed.
# LoadBalancer Service : Similar to NodePort, but with an external IP (stays <pending> on minikube, while cloud providers like AWS assign one automatically).
--> k expose deployment k8s-web-app --type=LoadBalancer --port=3000 -> creates a Service of type LoadBalancer for the deployment.
$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8s-web-app LoadBalancer 10.97.237.162 <pending> 3000:32080/TCP 10s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h58m
$ minikube service k8s-web-app
|-----------|-------------|-------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|-----------------------------|
| default | k8s-web-app | 3000 | http://172.27.189.212:32080 |
|-----------|-------------|-------------|-----------------------------|
🎉 Opening service default/k8s-web-app in default browser...
The server will be made available on : http://172.27.189.212:32080
**Rolling Update : Updates the deployed code base to a new version without a service outage. New pods with the new image are created while the old ones keep serving traffic, & the old pods are removed only once the new ones are ready.
* Docker cmds to create & push updated image to dockerhub -->
> docker build . -t insanelytamojit/k8s-web-app:2.0.0 //building version 2.0.0
> docker push insanelytamojit/k8s-web-app:2.0.0 //pushing to DockerHub
--> k set image deployment k8s-web-app k8s-web-app=insanelytamojit/k8s-web-app:2.0.0 --> updates the deployment's container image to the newly pushed 2.0.0 image, triggering a rolling update.
--> k rollout status deploy k8s-web-app --> watches the rollout of the update until it has completed.
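NB: The rolling-update behaviour can be tuned in the Deployment spec; a minimal sketch of the relevant fields (the values shown are assumptions for illustration, not what this deployment actually uses):
      spec:
        replicas: 4
        strategy:
          type: RollingUpdate
          rollingUpdate:
            maxSurge: 1         # at most 1 extra pod above the desired count during the update
            maxUnavailable: 1   # at most 1 pod may be unavailable while old pods are replaced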
NB: > Since we have specified a fixed no. of replicas for the deployment (here = 4), K8S maintains that replica count at all times. Even if we delete a pod manually, a new pod is immediately created by K8S to keep the ReplicaSet at the desired size.
> k delete all --all --> deletes all resources (deployments, pods, services) present in the "default" namespace.
**Specification/Config yaml/yml files to declare deployments & services.
--> kubectl apply -f deployment.yaml --> creates/updates a deployment according to the specification in the deployment.yaml file.
--> k delete -f deployment.yaml -f service.yaml --> deleting the services & deployment.
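NB: The contents of deployment.yaml & service.yaml are not shown above; a minimal sketch of what they might look like for the k8s-web-app example (image, port & replica count assumed from the earlier sections):
      # deployment.yaml (sketch)
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: k8s-web-app
      spec:
        replicas: 4
        selector:
          matchLabels:
            app: k8s-web-app
        template:
          metadata:
            labels:
              app: k8s-web-app
          spec:
            containers:
              - name: k8s-web-app
                image: insanelytamojit/k8s-web-app:2.0.0
                ports:
                  - containerPort: 3000
      # service.yaml (sketch)
      apiVersion: v1
      kind: Service
      metadata:
        name: k8s-web-app
      spec:
        type: ClusterIP
        selector:
          app: k8s-web-app
        ports:
          - port: 3000
            targetPort: 3000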
**NGINX endpoint connection to main server
--> kubectl apply -f k8s-web-app-nginx.yaml -f nginx.yaml --> spinning up NGINX services & deployments.
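NB: The actual contents of nginx.yaml are not shown above; one plausible sketch (purely an assumption) is an NGINX config, shipped as a ConfigMap, that proxies requests to the k8s-web-app Service by its DNS name:
      # fragment of a hypothetical nginx.yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nginx-conf
      data:
        default.conf: |
          server {
            listen 80;
            location / {
              proxy_pass http://k8s-web-app:3000;   # backend resolved via the cluster DNS
            }
          }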
**Changing K8S container runtime from Docker to CRI-O
--> minikube start --driver=hyperv --container-runtime=cri-o --> creates a minikube cluster on Hyper-V with the CRI-O container runtime.