- minikube website
- minikube start
minikube --help
minikube start
kubectl help
kubectl set -h
kubectl config view
kubectl config get-contexts
kubectl config current-context
kubectl config use-context minikube
Figure 11.1. Kubernetes components of the Control Plane and the worker nodes from Kubernetes in Action.
See Kubernetes Components for a description of each component.
kubectl get componentstatuses
NOTE: Hard-coded addresses of scheduler and controller manager causes unhealthy ComponentStatus #96848
kubectl get all -n kube-system
ps -ef | grep kubelet
kubectl get nodes
kubectl describe node minikube
minikube dashboard
kubectl api-resources
kubectl api-resources --namespaced=false
kubectl api-resources --api-group=apps
kubectl api-resources --api-group=storage.k8s.io
kubectl explain namespaces
kubectl get namespaces
kubectl create namespace jeffs-space
kubectl get namespaces
kubectl delete namespace jeffs-space
kubectl get namespaces
kubectl get all --all-namespaces
- Declarative Management of Kubernetes Objects Using Configuration Files
- Understanding Kubernetes Objects
Explore:
- Create a namespace with jeffs-ns.yaml:
kubectl apply -f jeffs-ns.yaml
kubectl get namespaces
- Query the namespaces via the API:
- In a different console window:
kubectl proxy --port=8080
- In the main console window:
curl http://localhost:8080/api/v1/namespaces
- To delete it using the config file:
kubectl delete -f jeffs-ns.yaml
kubectl get namespaces
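The contents of jeffs-ns.yaml are not reproduced in these notes. A minimal Namespace manifest for it would look like the sketch below (the name is an assumption based on the filename):

```yaml
# Hypothetical sketch of jeffs-ns.yaml: a minimal Namespace object.
apiVersion: v1
kind: Namespace
metadata:
  name: jeffs-ns   # assumed name; match whatever the course repo uses
```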
- Organizing Cluster Access Using kubeconfig Files
- Create pods in each namespace
- Configure Access to Multiple Clusters
Explore:
- Get existing contexts:
kubectl config get-contexts
- Update the current context to use a new namespace:
kubectl create namespace intro2k8s
kubectl config set-context --current --namespace=intro2k8s
kubectl config get-contexts
kubectl config set-context --current --namespace=default
kubectl config get-contexts
- Create and delete a new context:
kubectl create namespace hello-kube
kubectl config set-context hello-minikube --cluster=minikube --namespace=hello-kube --user minikube
kubectl config get-contexts
kubectl config current-context
kubectl config use-context hello-minikube
kubectl config delete-context hello-minikube
kubectl config current-context
kubectl config get-contexts
kubectl get nodes
kubectl config use-context minikube
kubectl config get-contexts
kubectl get namespaces
kubectl delete namespace hello-kube
kubectl config set-context --current --namespace=intro2k8s
kubectl config get-contexts
- Pods
- Pod Lifecycle
- Resource Management for Pods and Containers
- Configure Liveness, Readiness and Startup Probes
kubectl explain pods
kubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml
kubectl get pods
kubectl get pods -o wide
kubectl describe pods nginx
kubectl logs nginx
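For reference, simple-pod.yaml from the Kubernetes examples defines a single nginx container. At the time of writing it looks like this (check the URL above for the current version):

```yaml
# simple-pod.yaml from the upstream Kubernetes examples (may change upstream).
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```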
Explore:
- Run commands:
kubectl exec nginx -- ls -l /
kubectl exec nginx -- echo "Hi from the nginx pod!"
- Shell into the nginx pod and look around:
kubectl exec -it nginx -- /bin/bash
- To hit the nginx pod, we need a shell inside the cluster:
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
Run commands:
wget -q -O - <IP address of nginx pod>
exit
- Use port forwarding in a second terminal window:
kubectl port-forward -h
kubectl port-forward nginx 8080:80
- Visit http://localhost:8080/
- Now that we have hit the nginx web server, check the logs again:
kubectl logs nginx
- Clean up:
- Stop the port forwarding
- Remove the objects:
kubectl get all
kubectl delete pod/nginx pod/busybox
- Storage documentation
- Volumes documentation
- Configure a Pod to Use a Volume for Storage
We will use init-container-pod.yaml for this demo. Run and connect:
kubectl apply -f init-container-pod.yaml
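The full init-container-pod.yaml lives in the course materials. As a hedged sketch of the pattern it demonstrates (image names, the content URL, and paths below are illustrative assumptions), an init container seeds an emptyDir volume that the web server then serves:

```yaml
# Hypothetical sketch of init-container-pod.yaml: the init container downloads
# web content into an emptyDir volume shared with the nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: demo-webapp
spec:
  initContainers:
    - name: fetch-content
      image: busybox                      # assumed image
      command: ["wget", "-O", "/work-dir/index.html", "https://example.com/index.html"]  # placeholder URL
      volumeMounts:
        - name: webroot
          mountPath: /work-dir
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: webroot
          mountPath: /usr/share/nginx/html   # nginx serves the downloaded file
  volumes:
    - name: webroot
      emptyDir: {}                           # shared, pod-lifetime storage
```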
Explore:
- Get information about our pod:
kubectl get all
kubectl describe pod/demo-webapp
- Use port forwarding in a second terminal window:
kubectl port-forward demo-webapp 8080:80
- Access the application locally:
curl localhost:8080
- Visit http://localhost:8080/, refreshing the page if you see the nginx banner from the previous demo.
- Clean up:
- Stop the port forwarding
- Remove the objects:
kubectl delete pod/demo-webapp
Photo credit: Saveur - England's 20-Year-Old 'Two Fat Ladies' is Still the Best Cooking Show Ever Made
- How Pods manage multiple containers
- The Logging Architecture page has a good example of this pattern.
We will use sidecar-container-pod.yaml for this demo. Run and connect:
kubectl apply -f sidecar-container-pod.yaml
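The full sidecar-container-pod.yaml lives in the course materials. As a hedged sketch of the sidecar pattern (image names, commands, and paths below are illustrative assumptions), a second container continually rewrites content on an emptyDir volume while nginx serves it:

```yaml
# Hypothetical sketch of sidecar-container-pod.yaml: the sidecar updates
# index.html on a shared volume that nginx serves.
apiVersion: v1
kind: Pod
metadata:
  name: lottery-app
  labels:
    app: lottery-app
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: webroot
          mountPath: /usr/share/nginx/html
    - name: lottery-sidecar
      image: busybox                      # assumed image
      command: ["sh", "-c", "while true; do echo \"Drawing at $(date)\" > /webroot/index.html; sleep 5; done"]
      volumeMounts:
        - name: webroot
          mountPath: /webroot
  volumes:
    - name: webroot
      emptyDir: {}                        # shared between both containers
```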
Explore:
- Get information about our pod:
kubectl describe pod/lottery-app
- Use port forwarding in a second terminal window:
kubectl port-forward lottery-app 8080:80
- Access the application locally:
curl localhost:8080
- Visit http://localhost:8080/, refreshing the page if you see the nginx banner from the previous demo.
- Clean up:
- Stop the port forwarding
- Remove the objects:
kubectl delete pod/lottery-app
Explore:
- View labels on existing objects:
kubectl get nodes --show-labels
- Create a new container and examine the labels:
kubectl apply -f sidecar-container-pod.yaml
kubectl get all --show-labels
- Get help on labels:
kubectl label -h
- Label our new pod:
kubectl label pods lottery-app some-key=some-value
kubectl get pods --show-labels
- Try and change the label:
kubectl label pods lottery-app some-key=some-other-value
- Fix the error:
kubectl label pods lottery-app some-key=some-other-value --overwrite=true
kubectl get pods --show-labels
- Delete the label:
kubectl label pods lottery-app some-key-
kubectl get pods --show-labels
Explore:
- Add an environment label and some additional recommended labels in the config by applying devl-sidecar-container-pod.yaml. Note the additional entries in the metadata.labels section.
Top of devl-sidecar-container-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: lottery-app
  labels:
    app: lottery-app
    environment: development
    app.kubernetes.io/name: lottery-app
    app.kubernetes.io/version: "1.1.0"
    app.kubernetes.io/component: webapp
...
Apply and check:
kubectl apply -f devl-sidecar-container-pod.yaml
kubectl get pods --show-labels
- Apply prod-sidecar-container-pod.yaml to create a second pod with a different label.
Top of prod-sidecar-container-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: prod-lottery-app
  labels:
    app: lottery-app
    environment: production
    app.kubernetes.io/name: lottery-app
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: webapp
...
Apply:
kubectl apply -f prod-sidecar-container-pod.yaml
kubectl get pods --show-labels
- Query for all pods:
kubectl get pods -l app=lottery-app
- Query for just production:
kubectl get pods -l environment=production
- Query for just development:
kubectl get pods -l environment=development
- Query for pods in a list:
kubectl get pods -l 'environment in (production, qa)'
- Clean up:
kubectl delete pods -l app=lottery-app
Examine and deploy two demo pods:
kubectl apply -f service-demo-pod-1.yaml
kubectl apply -f service-demo-pod-2.yaml
kubectl get all --show-labels
- Select each using labels:
kubectl get pods -l app=service-demo-app
kubectl get pods -l app.kubernetes.io/version=1.0.0,app=service-demo-app
kubectl get pods -l app.kubernetes.io/version=2.0.0,app=service-demo-app
- Examine the output of version 1:
kubectl exec `kubectl get pods -l app.kubernetes.io/version=1.0.0,app=service-demo-app -A -o jsonpath="{.items[0].metadata.name}"` -- curl -vs http://localhost
- Examine the output of version 2:
kubectl exec `kubectl get pods -l app.kubernetes.io/version=2.0.0,app=service-demo-app -A -o jsonpath="{.items[0].metadata.name}"` -- curl -vs http://localhost
Leave the pods running for the next section.
- Services, Load Balancing, and Networking
- Connecting Applications with Services
- Service
- Access Services Running on Clusters
- Minikube Accessing apps
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
- Apply service-demo-cluster-ip-svc.yaml and check:
kubectl apply -f service-demo-cluster-ip-svc.yaml
kubectl get all -o wide --show-labels
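The ClusterIP manifest is not reproduced in these notes. Based on the service name used in this demo and the selector shown in the NodePort variant later, it likely looks something like this (the metadata is an assumption):

```yaml
# Hypothetical sketch of service-demo-cluster-ip-svc.yaml.
apiVersion: v1
kind: Service
metadata:
  name: demo-webapp-service
spec:
  # No explicit type, so it defaults to ClusterIP.
  selector:
    app: service-demo-app
    app.kubernetes.io/name: service-demo-app
  ports:
    - protocol: TCP
      port: 80
```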
In another terminal window run:
kubectl proxy
then access the service via the exposed API endpoint: http://127.0.0.1:8001/api/v1/namespaces/intro2k8s/services/demo-webapp-service/proxy/
NOTE: Stop the kubectl proxy process but leave the pods running for the next section.
- Expose the service outside the cluster by applying service-demo-nodeport-svc.yaml. NOTE: this is the same config as the ClusterIP service above except for the type on the last line.
- The bottom of service-demo-nodeport-svc.yaml:
...
spec:
  selector:
    app: service-demo-app
    app.kubernetes.io/name: service-demo-app
  ports:
    - protocol: TCP
      port: 80
  type: NodePort
- Apply and run:
kubectl apply -f service-demo-nodeport-svc.yaml
kubectl get all -o wide --show-labels
minikube service list
minikube service --url demo-webapp-service -n intro2k8s
- Use a specific NodePort (30007) by applying service-demo-nodeport-30007-svc.yaml. NOTE: this is the same config as the service above except for the nodePort on the second-to-last line.
The bottom of service-demo-nodeport-30007-svc.yaml:
...
spec:
  selector:
    app: service-demo-app
    app.kubernetes.io/name: service-demo-app
  ports:
    - protocol: TCP
      port: 80
      nodePort: 30007
  type: NodePort
- Apply the update:
kubectl apply -f service-demo-nodeport-30007-svc.yaml
minikube service --url demo-webapp-service -n intro2k8s
- Clean up:
kubectl get all
kubectl delete service/demo-webapp-service
kubectl delete pods -l app=service-demo-app
kubectl get all
- A file used to configure access to a cluster is called a kubeconfig file.
- By default, kubectl looks for a file named config in the $HOME/.kube directory.
- You can specify other kubeconfig files by setting the KUBECONFIG environment variable or using the --kubeconfig flag.
- The kubeconfig has three parts: clusters, users, and contexts.
- See Configure Access to Multiple Clusters for specifics on how to add clusters to your kubeconfig file.
- Contexts specify the cluster, user, and namespace used when performing operations.
- You will need at least one context per cluster, but you can specify more than one if you want a quick way to switch between users or namespaces within a cluster.
- See Organizing Cluster Access Using kubeconfig Files for more.
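The three parts can be seen in a minimal kubeconfig. The sketch below shows the shape of a single-cluster minikube setup (server address and file paths are illustrative assumptions; compare with `kubectl config view` on your machine):

```yaml
# Hypothetical minimal kubeconfig showing the three parts.
apiVersion: v1
kind: Config
clusters:
  - name: minikube
    cluster:
      server: https://192.168.49.2:8443          # assumed address
      certificate-authority: /home/user/.minikube/ca.crt
users:
  - name: minikube
    user:
      client-certificate: /home/user/.minikube/profiles/minikube/client.crt
      client-key: /home/user/.minikube/profiles/minikube/client.key
contexts:
  - name: minikube
    context:           # a context ties together cluster, user, and namespace
      cluster: minikube
      user: minikube
      namespace: default
current-context: minikube
```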
- Namespaces provide a mechanism for isolating groups of resources within a single cluster.
- Names of resources need to be unique within a namespace, but not across namespaces.
- Namespace-based scoping is applicable only for namespaced objects (e.g., Deployments, Services, etc.) and not for cluster-wide objects (e.g., StorageClass, Nodes, PersistentVolumes, etc.).
- Namespaces cannot be nested inside one another, and each Kubernetes resource can only be in one namespace.
- See Share a Cluster with Namespaces for more on creating, deleting, and using namespaces.
- The core of Kubernetes' control plane is the API server.
- The API server exposes an HTTP API that lets end-users, different parts of your cluster, and external components communicate with one another.
- The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes (for example, Pods, Namespaces, ConfigMaps, and Events).
- You can perform most operations using the kubectl command-line interface or other command-line tools, such as kubeadm, which in turn use the API. However, you can also access the API directly using REST calls.
- Consider using one of the client libraries if you are writing an application using the Kubernetes API.
- See The Kubernetes API for more.
- Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
- Pods are designed to support multiple cooperating processes (as containers) that form a cohesive service unit.
- The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster.
- The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.
- Only co-locate containers in the same Pod if they need to share resources and should be scaled together.
- Pods natively provide two kinds of shared resources for their constituent containers: networking and storage.
- One rarely creates individual Pods directly in Kubernetes because Pods are designed as relatively ephemeral, disposable entities.
- When a Pod gets created, it's scheduled to run on a Node in your cluster where it remains until it finishes execution, is deleted, is evicted for lack of resources, or the node fails.
- See Pods for more.
- A Pod can specify a set of shared storage volumes that are available to all containers in the Pod.
- We used an Init Container to download web content from GitHub and save it to an emptyDir shared volume.
- In the lottery demo, we used a Sidecar container to update a volume shared with a web server dynamically.
- See Storage for more information on how Kubernetes implements shared storage and makes it available to Pods.
- Labels are key/value pairs attached to objects, such as Pods.
- Use labels to identify, organize, and group objects in a loosely coupled way.
- Labels can be attached to objects at creation time and subsequently added and modified at any time.
- Each object can have a set of key/value labels defined.
- Each key must be unique for a given object.
- Labels do not provide uniqueness. In general, we expect many objects to carry the same label(s).
- See Labels and Selectors for more.
- The label selector is the core grouping primitive in Kubernetes.
- Clients, users, and objects use selectors to identify a set of objects.
- The API currently supports two types of selectors: equality-based and set-based.
- Multiple labels can be in scope for a selector. If so, all must be satisfied to match.
- See Labels and Selectors for more.
- A Service is an abstraction that defines a logical set of Pods and a policy to access them (sometimes this pattern is called a micro-service).
- A selector usually determines the set of Pods targeted by a Service.
- While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the group of backends themselves. The Service abstraction enables this decoupling.
- In class we explored two service types, ClusterIP and NodePort.
- Key features of ClusterIP:
  - It exposes the Service on a cluster-internal IP.
  - Using a ClusterIP makes the Service only reachable from within the cluster.
  - This is the default ServiceType.
- Key features of NodePort:
  - It exposes the Service on each Node's IP at a static port (the NodePort).
  - The Service is exposed outside the cluster through the NodePort Service at <NodeIP>:<NodePort>.
  - Kubernetes automatically creates a corresponding ClusterIP Service to internally route traffic from the externally exposed NodePort Service.
- See Services for more.