This repo contains my Kubernetes demos in Azure:
- Azure Container Instance demo
- Building custom ACS cluster
- Using stateless app farms in mixed environment
- Stateful applications and StatefulSet with Persistent Volume
- Helm
- CI/CD with Jenkins and Helm
- Monitoring
Before we start with Kubernetes, let's look at Azure Container Instances. This is a top-level resource in Azure, so you don't have to create (and pay for) any VM - you create a container directly and pay per second. In this demo we will deploy Microsoft SQL Server in a Linux container.
az group create -n aci-group -l westeurope
az container create -n mssql -g aci-group --cpu 2 --memory 4 --ip-address public --port 1433 -l eastus --image microsoft/mssql-server-linux -e 'ACCEPT_EULA=Y' 'SA_PASSWORD=my(!)Password'
export sqlip=$(az container show -n mssql -g aci-group --query ipAddress.ip -o tsv)
watch az container logs -n mssql -g aci-group
sqlcmd -S $sqlip -U sa -P 'my(!)Password' -Q 'select name from sys.databases'
az container delete -n mssql -g aci-group -y
az group delete -n aci-group -y
Azure Container Service (ACS) is a deployment, upgrade and scaling tool to get open source orchestrators up and running in Azure quickly. ACS, as the native embedded Azure offering (in GUI, CLI, etc.), is a production-grade packaging of the open source acs-engine deployment tool. In order to get the latest features we will download acs-engine directly, so we are able to tweak some of its parameters that are not yet available in the version embedded in ACS.
wget https://github.com/Azure/acs-engine/releases/download/v0.8.0/acs-engine-v0.8.0-linux-amd64.zip
unzip acs-engine-v0.8.0-linux-amd64.zip
mv acs-engine-v0.8.0-linux-amd64/acs-engine .
We will build multiple clusters to show some additional options, but the majority of this demo runs on the first one.
Our first cluster will be hybrid with Linux and Windows agents, with RBAC enabled and with support for Azure Managed Disks as persistent volumes in Kubernetes. Basic networking will be used with integration to Azure Load Balancer (for the Kubernetes LoadBalancer Service). First, we need to create a new resource group as well as an Azure service principal. On Azure, acs-engine uses a Service Principal to interact with Azure Resource Manager (ARM).
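The cluster definition myKubeACS.json is included in this repo. As an illustration only, such an apimodel looks roughly like the sketch below - field names follow the acs-engine apimodel of that era, while counts, VM sizes and pool names here are assumptions and the windowsProfile, linuxProfile and servicePrincipalProfile sections are omitted; the JSON in the repo is authoritative.
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "enableRbac": true
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "mykubeacs",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      { "name": "linuxpool", "count": 2, "vmSize": "Standard_D2_v2", "availabilityProfile": "AvailabilitySet" },
      { "name": "windowspool", "count": 2, "vmSize": "Standard_D2_v2", "osType": "Windows", "availabilityProfile": "AvailabilitySet" }
    ]
  }
}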
./acs-engine generate myKubeACS.json
cd _output/myKubeACS/
az group create -n mykubeacs -l westeurope
az ad sp create-for-rbac -n "MyApp" --role contributor --scopes /subscriptions/{SubID}/resourceGroups/mykubeacs
az group deployment create --template-file azuredeploy.json --parameters @azuredeploy.parameters.json -g mykubeacs
scp azureuser@mykubeacs.westeurope.cloudapp.azure.com:.kube/config ~/.kube/config
In this cluster we will use the Azure networking CNI plugin. This allows pods to get IP addresses directly from the Azure VNET and allows Azure networking features to be used with pods - for example Network Security Groups or direct communication between pods in the cluster and VMs in the same VNET.
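Compared to the first cluster definition, the main difference in myKubeAzureNet.json is the networking setting. In the acs-engine apimodel of that era this was driven by kubernetesConfig, roughly like this illustrative fragment (the JSON in the repo is authoritative):
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "networkPolicy": "azure"
      }
    }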
./acs-engine generate myKubeAzureNet.json
cd _output/myKubeAzureNet/
az group create -n mykubeazurenet -l westeurope
az group deployment create --template-file azuredeploy.json --parameters @azuredeploy.parameters.json -g mykubeazurenet
scp azureuser@mykubeazurenet.westeurope.cloudapp.azure.com:.kube/config ~/.kube/config-azurenet
export vnet=$(az network vnet list -g mykubeacs --query [].name -o tsv)
az vm create -n myvm -g mykubeacs --admin-username azureuser --ssh-key-value "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFhm1FUhzt/9roX7SmT/dI+vkpyQVZp3Oo5HC23YkUVtpmTdHje5oBV0LMLBB1Q5oSNMCWiJpdfD4VxURC31yet4mQxX2DFYz8oEUh0Vpv+9YWwkEhyDy4AVmVKVoISo5rAsl3JLbcOkSqSO8FaEfO5KIIeJXB6yGI3UQOoL1owMR9STEnI2TGPZzvk/BdRE73gJxqqY0joyPSWOMAQ75Xr9ddWHul+v//hKjibFuQF9AFzaEwNbW5HxDsQj8gvdG/5d6mt66SfaY+UWkKldM4vRiZ1w11WlyxRJn5yZNTeOxIYU4WLrDtvlBklCMgB7oF0QfiqahauOEo6m5Di2Ex" --image UbuntuLTS --nsg "" --vnet-name $vnet --subnet k8s-subnet --public-ip-address-dns-name mykubeextvm --size Basic_A0
ssh azureuser@mykubeextvm.westeurope.cloudapp.azure.com
On your local machine, install kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl) and run the GUI (Kubernetes dashboard) through kubectl proxy.
kubectl proxy
This set of demos focuses on stateless applications such as APIs or web frontends. We will deploy an application, balance it internally and externally, do a rolling upgrade, deploy both Linux and Windows containers and make sure they can access each other.
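The YAML files used below are part of this repo. As an illustration, deploymentWeb1.yaml and serviceWeb.yaml might look roughly like this sketch - the image tkubica/web:1 and the service name myweb-service come from the commands in this demo, while the deployment name, replica count and labels are assumptions:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myweb-deployment        # assumed name
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: tkubica/web:1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myweb-service           # name used by curl from the Ubuntu pod below
spec:
  selector:
    app: myweb
  ports:
  - port: 80
    targetPort: 80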
kubectl create -f deploymentWeb1.yaml
kubectl get deployments -w
kubectl get pods -o wide
kubectl create -f podUbuntu.yaml
kubectl create -f serviceWeb.yaml
kubectl get services
kubectl exec ubuntu -- curl -s myweb-service
kubectl create -f serviceWebExt.yaml
kubectl apply -f deploymentWeb2.yaml
kubectl create -f IIS.yaml
kubectl get service
kubectl exec ubuntu -- curl -s myiis-service-ext
kubectl delete -f serviceWebExt.yaml
kubectl delete -f serviceWeb.yaml
kubectl delete -f podUbuntu.yaml
kubectl delete -f deploymentWeb1.yaml
kubectl delete -f deploymentWeb2.yaml
kubectl delete -f IIS.yaml
Deployments in Kubernetes are great for stateless applications, but stateful apps, e.g. databases, might require different handling. For example we want to use persistent storage and make sure that when a pod fails, a new one is created and mapped to the same persistent volume (so data is persisted). In stateful applications we also want to keep identifiers such as network identity (IP address, DNS name) when a pod fails and needs to be rescheduled. When multiple replicas are used we need to start them one by one, because often the first instance is going to be the master and the others slaves (so we need to wait for the first one to come up before starting the rest). If we need to scale down, we want to do it from the last instance (not by killing the first instance, which is likely to be the master). More details can be found in the documentation.
In this demo we will deploy a single instance of PostgreSQL.
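A minimal sketch of what persistentVolumeClaim.yaml might contain - the claim name and storageClassName are assumptions, pick one of the classes listed by the next command; the file in this repo is authoritative:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-volume-claim   # assumed name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: default       # assumed; use a class from kubectl get storageclasses
  resources:
    requests:
      storage: 5Gi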
kubectl get storageclasses
kubectl create -f persistentVolumeClaim.yaml
kubectl get pvc
kubectl get pv
Make sure the volume is visible in Azure.
Clean up.
kubectl delete -f persistentVolumeClaim.yaml
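statefulSetPVC.yaml used in the next step combines a StatefulSet with a volumeClaimTemplate. An illustrative sketch follows - the PostgreSQL image, PGDATA path and storage size are assumptions, while the names match the postgresql-0 pod and postgresql-volume-claim-postgresql-0 PVC referenced later in this demo:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgresql                    # pods are named postgresql-0, postgresql-1, ...
spec:
  serviceName: postgresql
  replicas: 1
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - name: postgresql
        image: postgres:9.6           # assumed image and tag
        env:
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata   # data lands in a pgdata subfolder on the disk
        volumeMounts:
        - name: postgresql-volume-claim
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: postgresql-volume-claim   # yields PVC postgresql-volume-claim-postgresql-0
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: default       # assumed storage class
      resources:
        requests:
          storage: 5Gi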
kubectl create -f statefulSetPVC.yaml
kubectl get pvc -w
kubectl get statefulset -w
kubectl get pods -w
kubectl logs postgresql-0
kubectl exec -ti postgresql-0 -- psql -Upostgres
CREATE TABLE mytable (
name varchar(50)
);
INSERT INTO mytable(name) VALUES ('Azure User');
SELECT * FROM mytable;
\q
kubectl delete pod postgresql-0
kubectl exec -ti postgresql-0 -- psql -Upostgres -c 'SELECT * FROM mytable;'
Destroy the StatefulSet and PVC, but keep the PV.
kubectl delete -f statefulSetPVC.yaml
Go to the GUI and attach the IaaS volume (managed disk) to the VM, then mount it and show its content.
ssh azureuser@mykubeextvm.westeurope.cloudapp.azure.com
ls /dev/sd*
sudo mkdir /data
sudo mount /dev/sdc /data
sudo ls -lh /data/pgdata/
sudo umount /dev/sdc
Detach the disk in the GUI.
kubectl delete pvc postgresql-volume-claim-postgresql-0
az group create -n mykuberegistry -l westeurope
az acr create -g mykuberegistry -n mycontainers --sku Managed_Standard --admin-enabled true
az acr credential show -n mycontainers -g mykuberegistry
export acrpass=$(az acr credential show -n mycontainers -g mykuberegistry --query [passwords][0][0].value -o tsv)
docker.exe images
docker.exe tag tkubica/web:1 mycontainers.azurecr.io/web:1
docker.exe tag tkubica/web:2 mycontainers.azurecr.io/web:2
docker.exe tag tkubica/web:1 mycontainers.azurecr.io/private/web:1
docker.exe login -u mycontainers -p $acrpass mycontainers.azurecr.io
docker.exe push mycontainers.azurecr.io/web:1
docker.exe push mycontainers.azurecr.io/web:2
docker.exe push mycontainers.azurecr.io/private/web:1
az acr repository list -n mycontainers -o table
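To pull from the private repository the cluster needs registry credentials. A sketch of how this can be wired up - the secret name acr-secret, the e-mail and the pod name are assumptions, podACR.yaml in this repo is authoritative:
kubectl create secret docker-registry acr-secret --docker-server=mycontainers.azurecr.io --docker-username=mycontainers --docker-password=$acrpass --docker-email=user@example.com
apiVersion: v1
kind: Pod
metadata:
  name: myweb-private               # assumed name
spec:
  containers:
  - name: myweb
    image: mycontainers.azurecr.io/private/web:1
  imagePullSecrets:
  - name: acr-secret                # secret created by the command above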
kubectl create -f podACR.yaml
kubectl delete -f clusterRoleBindingUser1.yaml
az group delete -n mykuberegistry -y --no-wait
docker.exe rmi mycontainers.azurecr.io/web:1
docker.exe rmi mycontainers.azurecr.io/web:2
docker.exe rmi mycontainers.azurecr.io/private/web:1
Helm is a package manager for Kubernetes. It allows you to package together all resources needed for an application to run - Deployments, Services, StatefulSets and configuration values.
cd ./helm
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.6.1-linux-amd64.tar.gz
tar -zxvf helm-v2.6.1-linux-amd64.tar.gz
sudo cp linux-amd64/helm /usr/local/bin
helm
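Helm v2 needs its server-side component Tiller running in the cluster before charts can be installed. If it is not there yet, initialize it first (Tiller is deployed into the kube-system namespace by default):
helm init
kubectl get pods -n kube-system -w     # wait for the tiller-deploy pod to become Ready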
cd ./helm
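The chart in this directory follows the standard Helm chart layout, roughly as sketched here (the template file names are assumptions, the repo content is authoritative):
Chart.yaml        # chart name and version
values.yaml       # default configuration values, overridable with -f or --set at install time
templates/        # Kubernetes manifests templated with {{ .Values.* }} expressions
  deployment.yaml
  service.yaml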
helm install --name myblog -f values.yaml .
helm delete myblog --purge
In this demo we will see Jenkins deployed into Kubernetes via Helm, with Jenkins agents spinning up automatically as Pods.
CURRENT ISSUE: at the moment the nodeSelector for the agent does not seem to be delivered to the Kubernetes cluster correctly. Since our cluster is hybrid (Linux and Windows), as a workaround we currently need to turn off the Windows nodes.
helm install --name jenkins stable/jenkins -f jenkins-values.yaml
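Once the release is deployed, the chart's install notes print how to log in. Commands similar to these can be used to get the admin password and the external IP of the Jenkins service (resource names follow the release name jenkins and may differ between chart versions):
printf $(kubectl get secret jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo
kubectl get svc jenkins -w     # wait for EXTERNAL-IP of the LoadBalancer service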
Use this as the pipeline definition:
podTemplate(label: 'mypod') {
    node('mypod') {
        stage('Do something nice') {
            sh 'echo something nice'
        }
    }
}
Build the project in Jenkins and watch containers spin up and down.
kubectl get pods -o wide -w
Create a Log Analytics workspace and gather the workspace ID and key. Add the Container Monitoring solution.
Modify daemonSetOMS.yaml with your workspace ID and key.
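An illustrative excerpt of what such an OMS agent DaemonSet looks like - the image and environment variable names follow the Microsoft OMS agent for containers of that era, and daemonSetOMS.yaml in this repo is authoritative:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: omsagent
spec:
  template:
    metadata:
      labels:
        app: omsagent
    spec:
      containers:
      - name: omsagent
        image: microsoft/oms              # assumed image
        env:
        - name: WSID
          value: <your workspace ID>
        - name: KEY
          value: <your workspace key>
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock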
kubectl create -f daemonSetOMS.yaml
kubectl create -f podUbuntu.yaml
kubectl exec -ti ubuntu -- logger My app has just logged something
Container performance query example (Log Analytics):
Perf
| where ObjectName == "Container" and CounterName == "Disk Reads MB"
| summarize sum(CounterValue) by InstanceName, bin(TimeGenerated, 5m)
| render timechart
kubectl delete -f podUbuntu.yaml
kubectl delete -f daemonSetOMS.yaml