This is an architectural description of the smad-deploy-azure repository.
Folder | Description | Depends on |
---|---|---|
./00_tfstate_storage | Creates the resource group for the Terraform state file | |
./01_storage_rg | Creates a separate resource group for persistent data needs | |
./02_deployHono | Creates the Hono k8s cluster and services | |
./02_deployHono/modules | Modules used by the script | |
.../k8s | Module for creating a Kubernetes cluster in Azure (AKS) | |
.../ambassador | Handles deployment of Ambassador via Helm to the k8s cluster | k8s |
.../hono | Handles deployment of Hono via Helm to the k8s cluster | k8s |
.../influxdb | Handles deployment of InfluxDB to the k8s cluster. Holds all the information to set up the database for Prometheus metrics | k8s |
.../jaeger | Handles deployment of Jaeger via Helm to the k8s cluster | k8s |
.../kafka | Handles deployment of a Kafka cluster via Helm to the k8s cluster | k8s |
.../kube_prometheus_stack | Handles deployment of kube-prometheus-stack via Helm to the k8s cluster | k8s |
.../mongodb | Handles deployment of MongoDB via Helm to the k8s cluster | k8s |
.../persistent_storage | Deploys persistent volumes and persistent volume claims | k8s |
./03_container_registry | Creates an ACR for the k8s cluster. Currently not used | |
This module is to be run separately, because it creates the storage account and container in Azure that hold the Terraform state files needed by the other modules.
Holds variables for naming the resources created by this module.
Outputs values for the resource group, storage account and storage container.
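As a hedged sketch, the bootstrap boils down to three azurerm resources roughly like the following (the resource and variable names here are illustrative, not the module's actual identifiers):

```hcl
resource "azurerm_resource_group" "tfstate" {
  name     = var.tfstate_rg_name
  location = var.location
}

resource "azurerm_storage_account" "tfstate" {
  name                     = var.tfstate_storage_account_name
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}
```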
This module creates a separate resource group for persistent volume claims.
Establishes the azurerm backend with the previously set naming for tfstate files, and creates a resource group named storage-resource-group.
Outputs the created resource group's id.
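A minimal sketch of that pattern, assuming the backend names below match what 00_tfstate_storage created (they are placeholders here):

```hcl
terraform {
  backend "azurerm" {
    # Placeholder names; the real values come from 00_tfstate_storage.
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "01_storage_rg.tfstate"
  }
}

resource "azurerm_resource_group" "storage" {
  name     = "storage-resource-group"
  location = var.location
}

output "storage_rg_id" {
  value = azurerm_resource_group.storage.id
}
```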
Used for deploying the modules and setting up the proper environment for the Kubernetes and Helm providers and the azurerm backend.
The project name is prefixed with the Terraform Workspace name.
Uses the module specified in the ./modules/k8s/ folder for deploying the Kubernetes cluster under the k8test-rg resource group. The node count of the cluster is controlled by the k8s_agent_count variable: the node count for the default Terraform workspace is 3, and for non-default workspaces it is 2. The use_separate_storage_rg variable controls whether a separate resource group for storage purposes is created.
Role assignment for the separate resource group. Gets its scope value from a data module. Created only when use_separate_storage_rg is true, as sketched below.
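The workspace prefixing, the conditional node count and the conditional role assignment could look roughly like this sketch (the k8s_agent_count_test variable name, the remote-state data source and the Contributor role are assumptions for illustration):

```hcl
locals {
  # The workspace name prefixes the project name, giving each workspace unique resource names.
  project_name = "${terraform.workspace}-${var.project_name}"

  # 3 nodes in the "default" workspace, 2 in any other (test) workspace.
  node_count = terraform.workspace == "default" ? var.k8s_agent_count : var.k8s_agent_count_test
}

# Created only when use_separate_storage_rg is true; the scope comes from a data source.
resource "azurerm_role_assignment" "storage_rg" {
  count                = var.use_separate_storage_rg ? 1 : 0
  scope                = data.terraform_remote_state.storage_rg.outputs.storage_rg_id
  role_definition_name = "Contributor" # assumed role for illustration
  principal_id         = module.k8s.kubelet_identity_object_id
}
```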
Uses the module specified in the ./modules/hono/ folder for deploying Eclipse Hono on the previously created Kubernetes cluster. A custom MongoDB username and password can be supplied to the services; otherwise defaults are used.
The Kubernetes and Helm providers are configured with outputs acquired from the k8s cluster module.
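Wiring the providers to the module outputs usually looks like this sketch (the output names are assumptions based on the outputs listed for the k8s module below):

```hcl
provider "kubernetes" {
  host                   = module.k8s.host
  client_certificate     = base64decode(module.k8s.client_certificate)
  client_key             = base64decode(module.k8s.client_key)
  cluster_ca_certificate = base64decode(module.k8s.cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host                   = module.k8s.host
    client_certificate     = base64decode(module.k8s.client_certificate)
    client_key             = base64decode(module.k8s.client_key)
    cluster_ca_certificate = base64decode(module.k8s.cluster_ca_certificate)
  }
}
```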
Adds the Bitnami Helm chart that bootstraps an InfluxDB deployment on the k8s cluster using the Helm package manager.
Adds the Bitnami Helm chart that bootstraps a MongoDB deployment on the k8s cluster using the Helm package manager.
Adds the Helm chart that bootstraps a kube-prometheus-stack deployment on the k8s cluster using the Helm package manager. Includes Prometheus and Grafana.
Adds the Helm chart that bootstraps an Ambassador deployment on the k8s cluster using the Helm package manager.
Adds the Helm chart that bootstraps a Jaeger deployment on the k8s cluster using the Helm package manager.
Adds the Helm chart that bootstraps a Kafka cluster on the k8s cluster using the Helm package manager.
Used to specify the project name. There is no need to change it, because the Terraform Workspace prefix already creates unique project names.
Node count for clusters using the "default" Terraform Workspace.
Node count for clusters using a non-default Terraform Workspace. Used for test deployments.
MongoDB username for the deployed MongoDB instance. Can be specified with .tfvars.
MongoDB password for the deployed MongoDB instance. Can be specified with .tfvars.
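A sketch of how such a credential variable is typically declared (the default shown is illustrative):

```hcl
variable "mongodb_password" {
  description = "MongoDB password for the deployed MongoDB instance."
  type        = string
  sensitive   = true
  default     = "changeme" # illustrative default; override via .tfvars
}
```

Overriding it per environment then works with, for example, `terraform apply -var-file=secrets.tfvars`.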
Outputs for the kube config file and its path.
Creates the resource group for the Kubernetes cluster, named with the project name and the resource_group_name suffix specified in variables.
A Log Analytics workspace is also created, with the name ContainerInsights. By default Log Analytics is disabled; it can be enabled by setting the variable "enable_log_analytics" to true.
The Kubernetes cluster is created with the resource "azurerm_kubernetes_cluster" "k8s_cluster" under the previously created resource group.
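A trimmed sketch of that resource, assuming the surrounding resource group is named k8s in the configuration and using an illustrative VM size:

```hcl
resource "azurerm_kubernetes_cluster" "k8s_cluster" {
  name                = "${var.project_name}-aks"
  location            = azurerm_resource_group.k8s.location
  resource_group_name = azurerm_resource_group.k8s.name
  dns_prefix          = var.project_name

  default_node_pool {
    name       = "default"
    node_count = var.k8s_agent_count
    vm_size    = "Standard_D2_v2" # illustrative size
  }

  identity {
    type = "SystemAssigned"
  }
}
```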
Creates a storage class with a reclaim policy of Retain. The resource group is defined with the separate_storage_rg variable; if it is false, a null value is used, which means the storage is created in the same resource group as the k8s cluster.
Creates the persistent volume claim for MongoDB.
Creates the persistent volume claim for InfluxDB.
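In sketch form, the storage class and one of the claims might look like this (names and sizes are illustrative; merge() drops the resourceGroup parameter when separate_storage_rg is false, which falls back to the cluster's own resource group):

```hcl
resource "kubernetes_storage_class" "retain" {
  metadata {
    name = "managed-retain" # illustrative name
  }
  storage_provisioner = "kubernetes.io/azure-disk"
  reclaim_policy      = "Retain"
  parameters = merge(
    {
      storageaccounttype = "Standard_LRS"
      kind               = "Managed"
    },
    # Omitting resourceGroup places the disks in the cluster's own resource group.
    var.separate_storage_rg ? { resourceGroup = var.storage_rg_name } : {}
  )
}

resource "kubernetes_persistent_volume_claim" "mongodb" {
  metadata {
    name = "mongodb-pvc" # illustrative name
  }
  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = kubernetes_storage_class.retain.metadata[0].name
    resources {
      requests = {
        storage = "8Gi" # illustrative size
      }
    }
  }
}
```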
Contains variables for naming all the resources and specifying the node count. The project_name, k8s_agent_count and resource_group_name_suffix variables can be set from the root main.tf.
Output values acquired from the k8s cluster's kube config. These include client keys, certificates, usernames, passwords and hosts for the k8s cluster.
Depends on k8s module
This module handles the deployment of Hono. Uses Helm for deployment.
A direct URL to the chart's source is provided one line above each helm_release resource. Usually that URL is where one can see how the chart can be configured.
The chart values can be set either by giving the values in a .yaml file or by including a set block within the helm_release block. In the current script, the chart values are mostly set via the included .yaml file.
If the chart source page doesn't provide a list of settable values, they can also be shown by adding the repo and running helm show values:

```
$ helm repo add <chosen_repo_name> <repo_url>
$ helm repo update
$ helm show values <chosen_repo_name>/<chart_name>
```
Deploys Hono from the Helm chart. Uses hono_values.yaml for configuration, and sensitive MongoDB values are acquired from variables.tf. Deploys only after kube-prometheus-stack has successfully deployed.
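The shape of that release, as a sketch (the repository URL and the set_sensitive value path are assumptions; the values file name comes from the script):

```hcl
resource "helm_release" "hono" {
  name       = "eclipse-hono"
  repository = "https://eclipse.org/packages/charts" # assumed chart location
  chart      = "hono"

  values = [file("${path.module}/hono_values.yaml")]

  # Sensitive MongoDB credentials come from variables.tf, not the values file.
  set_sensitive {
    name  = "mongodb.password" # illustrative value path
    value = var.mongodb_password
  }

  # In the real script, a depends_on on the kube-prometheus-stack deployment
  # enforces the install order described above.
}
```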
Holds information related to the MongoDB username and password. These can be configured independently; otherwise defaults are used.
Configures Hono to use the separately deployed MongoDB for the device registry. The other services provided by the Hono Helm chart are disabled, because smad-deploy-azure uses separately deployed and configured services.
Depends on k8s module
This module deploys Ambassador. Uses Helm for deployment.
Depends on k8s module
This module handles all the aspects of deploying MongoDB. Uses Helm for deployment.
Values used by the service are supplied by the values.yaml file. Sensitive values such as usernames and passwords are acquired from variables.tf.
Holds information related to the MongoDB username and password. These can be configured independently; otherwise defaults are used.
Configures the persistent volume claim for MongoDB, and enables metrics and the stateful set.
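A sketch of the release with those settings (the value paths follow the Bitnami mongodb chart; the PVC variable name is an assumption):

```hcl
resource "helm_release" "mongodb" {
  name       = "mongodb"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "mongodb"

  values = [file("${path.module}/values.yaml")]

  # Reuse the claim created by the persistent_storage module.
  set {
    name  = "persistence.existingClaim"
    value = var.mongodb_pvc_name # assumed variable name
  }

  set {
    name  = "metrics.enabled"
    value = "true"
  }

  set_sensitive {
    name  = "auth.rootPassword"
    value = var.mongodb_password
  }
}
```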
Depends on k8s module
This module handles all the aspects of deploying Prometheus and Grafana. Uses Helm for deployment.
Creates a Kubernetes config map and supplies preconfigured Grafana dashboards via .json files.
Deploys kube-prometheus-stack, which consists of Prometheus, kube metrics and Grafana. Gets its values from prom_values.yaml.
Configures Grafana and Prometheus as LoadBalancers, and configures the scrape configs for Hono.
Includes the Grafana dashboard information.
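The dashboard wiring relies on kube-prometheus-stack's Grafana sidecar, which imports config maps carrying the grafana_dashboard label; a sketch (the names and file path are illustrative):

```hcl
resource "kubernetes_config_map" "grafana_dashboards" {
  metadata {
    name = "grafana-dashboards" # illustrative name
    labels = {
      # The Grafana sidecar in kube-prometheus-stack imports config maps with this label.
      grafana_dashboard = "1"
    }
  }

  data = {
    "hono-dashboard.json" = file("${path.module}/dashboards/hono.json") # illustrative path
  }
}
```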
Depends on k8s module
This module deploys InfluxDB as long-term storage for Prometheus monitoring data.
Adds the Bitnami Helm chart that bootstraps an InfluxDB deployment on the Kubernetes cluster using the Helm package manager, and creates a database named monitoring_data for the monitoring data.
Values used for deploying InfluxDB.
Depends on k8s module
This module handles all the aspects of deploying Jaeger. Uses Helm for deployment.
Deploys jaeger-operator, configured with values from jaeger_values.yaml.
Jaeger is enabled with simple metadata.
NOT USED
Creates an Azure Container Registry in the same resource group as the k8s modules.
Assigns the acrpull role to the k8s cluster.
Variables for naming the resources.
Output values for the ACR, containing its id, login URL, username and password.
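A sketch of the registry plus the pull-role assignment (names and the SKU are illustrative; the principal id would come from the cluster's kubelet identity):

```hcl
resource "azurerm_container_registry" "acr" {
  name                = var.acr_name
  resource_group_name = var.k8s_resource_group_name
  location            = var.location
  sku                 = "Basic" # illustrative SKU
}

# Lets the cluster's kubelet identity pull images from the registry.
resource "azurerm_role_assignment" "acrpull" {
  scope                = azurerm_container_registry.acr.id
  role_definition_name = "AcrPull"
  principal_id         = var.kubelet_identity_object_id # assumed input from the k8s module
}
```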