K8s operator that automatically names EC2 node instances for easier identification as K8s worker nodes.
The instance decorator solves the problem of identifying which EC2 instances belong to EKS K8s worker nodes by automatically updating their Name tag. These instances can then be easily viewed and searched using the AWS EC2 web console or CLI.
Nodes are named using the pattern: {ClusterName}-eks-{NodeGroupName}-workerNode-{NodeIPAddress} ({Zone}, {OperatingSystem})
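As an illustration, a minimal shell sketch of how the default template resolves for a single node. All sample values below are hypothetical, not taken from a real cluster:

```shell
# Hypothetical sample values for one worker node.
CLUSTER_NAME="demo-cluster"
NODE_GROUP_NAME="default-pool"
NODE_IP="10.0.1.12"
ZONE="us-east-1a"
OS="linux"

# Resolve the default template:
# {ClusterName}-eks-{NodeGroupName}-workerNode-{NodeIPAddress} ({Zone}, {OperatingSystem})
NAME="${CLUSTER_NAME}-eks-${NODE_GROUP_NAME}-workerNode-${NODE_IP} (${ZONE}, ${OS})"
echo "$NAME"
# → demo-cluster-eks-default-pool-workerNode-10.0.1.12 (us-east-1a, linux)
```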
node-instance-decorator does not scope its activities to the namespace in which it is installed. You do not need to (and should not) install copies into different namespaces within a single cluster.
A single installation will manage node naming across an entire cluster.
node-instance-decorator consists of two components:
- An image containing the operator
- A Helm chart containing the K8s deployment
Both must be installed to use the operator within a cluster.
You will need:
- An AWS EKS cluster with at least one node group, and the ARN associated with this cluster.
- An AWS ECR repository into which to deploy the operator image, and the URI associated with this repository. The URI can be obtained from the AWS web console or CLI. Note that the AWS ECR repository does not need to be in the same region as the cluster.
- Local installations of golang, kubectl, aws-cli and helm. On Windows, these should be installed within WSL.
- Local installation of script-runner, if intending to use scripted IAM role/policy creation.
- A Kubernetes namespace into which to install the operator. This namespace must exist and be specified using the {NAMESPACE} parameter below.
- NOTE: The .kubeconfig associated with the WSL kubectl is NOT the same as the one used in Windows.
Verify cluster access within WSL using
kubectl config get-contexts
and, if necessary, add the required context using e.g.
aws --region {aws.region} eks update-kubeconfig --name {cluster.name}
- Create an IAM role and associated policy that grants permission for the relevant EC2 operations. Note the ARN of the role that is created.
  - Script runner can automate this task:
    script-runner scripts\nodeInstanceDecorator-prepare-config -p "cluster.arn:{CLUSTER_ARN}"
    The value returned as acmCertificateAgent.iam.serviceRole.arn should be used as {SERVICE_ROLE_ARN} below.
  - To do this manually, follow the instructions at https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html using the trust policy template scripts\_resources\nodeInstanceDecorator-iam-role-trust-policy.template.
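For orientation, an EKS service-account trust policy generally takes the shape below (per the AWS documentation linked above). Every identifier here is a placeholder, not a value from this project's template; the service-account name in particular is an assumption:

```shell
# Write an illustrative IRSA-style trust policy. All account IDs, OIDC provider
# IDs, and the service-account name are placeholders.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:{NAMESPACE}:node-instance-decorator"
        }
      }
    }
  ]
}
EOF
```

The Condition clause restricts the role so that only the named service account in the named namespace can assume it.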
- Build and push the operator image to ECR.
  NOTE: This step is not required if the operator image has previously been deployed to your AWS account.
  NOTE: On Windows, run this command within WSL.
  make docker-build docker-push REPO_URI={REPOSITORY_URI}
- Deploy the operator to the cluster.
  NOTE: On Windows, run this command within WSL.
  make deploy REPO_URI={REPOSITORY_URI} CLUSTER_ARN={CLUSTER_ARN} ROLE_ARN={SERVICE_ROLE_ARN} NAMESPACE={NAMESPACE}
  {NAMESPACE} is the name of the K8s namespace into which the operator should be installed. This namespace must exist.
Existing worker nodes should be processed and their corresponding EC2 instance names updated automatically. You can view these names using e.g. the AWS EC2 web console or CLI.
The operator is fully automated - no action is required once installation is complete.
Review the EC2 worker node instances to confirm that their names are being automatically configured.
You can modify how worker nodes are named by configuring the config.nameTemplate key in values.yaml.
A name template is a string containing one or more substitution parameters delimited by curly braces, e.g. {Zone}.
Valid substitution parameters are: {Zone}, {ClusterName}, {NodeGroupName}, {NodeIPAddress}, {HostName}, {OperatingSystem}, {Architecture}
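For example, to name nodes by host name and architecture instead, you could place an override in a separate values file and supply it when installing the chart (the template string below is illustrative, not a recommended default):

```shell
# Illustrative override of the naming template; pass this file to the chart
# alongside your existing values (e.g. via helm's -f flag).
cat > values-override.yaml <<'EOF'
config:
  nameTemplate: "{ClusterName}-{HostName} ({Architecture})"
EOF
```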
Remove the operator from the cluster using:
make undeploy CLUSTER_ARN={CLUSTER_ARN}
This project uses the Kubernetes Operator pattern.
It was built from a kubebuilder project and subsequently modified to use Helm.
It uses controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
NOTE: Run make help
for more information on all available make
targets.
More information can be found in the Kubebuilder documentation.
To debug the Helm chart and inspect the intermediate YAML file that is created, run:
make helm-debug
Output will be generated as debug.yaml.