
Crucial Guide to Understanding Amazon Elastic Kubernetes Service Label Values

In the race to achieve digital transformation, companies shifted towards containerizing their existing environments to improve scalability, availability, and performance. As companies scale up their operations, there could be thousands of microservice-based containers running, making it very difficult to manage and deploy containers effectively. This scalability challenge prompted the birth of container orchestration tools like Kubernetes, an open-source, extensible, and widely-used platform for managing containers at scale.

Kubernetes is designed for microservice-based architectures. Because of its portable nature, you can deploy Kubernetes to public, private, or hybrid cloud environments. Out-of-the-box functions such as auto-scaling, load balancing, traffic routing, health checks, and self-healing promote faster deployment and an agile development environment. For this reason, many enterprises adopt Kubernetes-based workflows to take advantage of these features.

As your applications scale across multiple containers, you need an efficient way to identify and track your Kubernetes resources. You can do this by assigning Kubernetes labels to objects such as pods, nodes, and services. Implementing labeling best practices gives management both visibility and cost accountability, and helps DevOps engineers manage thousands of containers. In this blog, we will dive deeper into Kubernetes labels, including how to apply them, how they translate to Amazon Elastic Kubernetes Service (EKS) tags, and best practices to get the most out of your labels.

Kubernetes Terminology

To run Kubernetes, you should understand some of its terminology, especially namespaces and pods. A namespace essentially partitions the entire working space — known as the cluster — for different users or uses. A pod is the smallest deployable object in the Kubernetes architecture, typically one or more containers running together on your cluster.
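
As a quick sketch of these two terms in practice (the dev namespace and nginx pod names here are only illustrative), you can create a namespace and then scope pod operations to it:

# create a namespace to partition the cluster for a team or environment
kubectl create namespace dev

# run a pod inside that namespace and list only its pods
kubectl run nginx --image=nginx --namespace dev
kubectl get pods --namespace dev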

When you have hundreds of containers, you need to identify each resource within the Kubernetes architecture by attaching labels. Labels are simply key/value pairs that you assign to various Kubernetes objects, similar to tags in Docker.

Many objects will carry the same label, so labels are not inherently unique; they require an additional layer of filtering with label selectors. With label selectors, the user can identify a set of objects using either equality-based or set-based selectors, depending on whether a key is matched against a single value or against a set of values.

Kubernetes Labels

Labels are key-value pairs. Generally, a valid label key or value must be 63 characters or fewer, must begin and end with an alphanumeric character, and may contain only alphanumeric characters, dashes (-), underscores (_), and dots (.) in between.

We can label nearly anything in Kubernetes, such as deployments, services, nodes, pods, and more. For example, consider the YAML file for a pod that has two labels: environment: dev and app: webapp.

apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: dev
    app: webapp
spec:
  containers:
  - name: node
    image: node:latest
    ports:
    - containerPort: 80

You can specify labels at object creation and modify them at any time afterward. For example, you can assign a label to a running pod using the command:

kubectl label pods label-demo environment=dev
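
A few related kubectl commands round out day-to-day label management; a minimal sketch, reusing the label-demo pod and environment key from the example above:

# show the labels attached to every pod
kubectl get pods --show-labels

# change an existing label value (kubectl requires --overwrite for this)
kubectl label pods label-demo environment=staging --overwrite

# remove a label by appending a dash to its key
kubectl label pods label-demo environment-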

Kubernetes Label Selectors

Equality-based selectors filter resources by label key-value pairs using =, ==, and != (= and == are synonyms). For example:

  • environment = dev
  • app != backend

The first expression selects all resources whose environment key equals dev, and the second selects objects whose app label is not backend. You can pass these expressions to kubectl with the -l (or --selector) flag, as shown below.
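
A quick sketch of running these selectors from the command line (the label keys and values follow the earlier example):

# pods whose environment label equals dev
kubectl get pods -l environment=dev

# pods whose app label is anything other than backend (quotes keep the shell from expanding !)
kubectl get pods -l 'app!=backend'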

Combining labels with an equality-based selector also lets you control where a pod is scheduled. In the manifest below, the nodeSelector field tells Kubernetes to place the pod only on nodes carrying the label environment: dev.

apiVersion: v1
kind: Pod
metadata:
  name: label-demo
spec:
  containers:
  - name: label-demo
    image: node:latest
  nodeSelector:
    environment: dev

Set-based selectors work like logical operators on sets of values. The three operators are in, notin, and exists. For example:

  • environment in ( dev, live )
  • app notin ( backend, database )

The first example matches all resources whose environment label is either dev or live. The second does the opposite, matching all resources whose app label is neither backend nor database. The commands below show these operators in action.
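
As a sketch, the same set-based expressions can be passed to kubectl; quoting keeps the shell from interpreting the parentheses (the labels follow the examples above):

# pods whose environment label is dev or live
kubectl get pods -l 'environment in (dev, live)'

# pods whose app label is neither backend nor database
kubectl get pods -l 'app notin (backend, database)'

# pods that simply have an environment label, whatever its value (the exists operator)
kubectl get pods -l environment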

You can implement both set-based and equality-based selectors in a single operation. For example:

kubectl get pods -l 'environment=dev, app in (webapp, frontend)'

This command selects all pods with the environment label value dev and an app label value of webapp or frontend.
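
The same two selector styles also appear inside workload manifests. Here is a sketch of a Deployment (the names and label values are illustrative) that combines an equality-based matchLabels clause with a set-based matchExpressions clause:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp            # equality-based: app must equal webapp
    matchExpressions:
    - key: environment
      operator: In           # set-based: environment must be dev or live
      values:
      - dev
      - live
  template:
    metadata:
      labels:
        app: webapp
        environment: dev
    spec:
      containers:
      - name: node
        image: node:latest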

Kubernetes labels and selectors narrow our search criteria, helping us perform DevOps operations efficiently. But how do they translate to managed Kubernetes platforms like Amazon Elastic Kubernetes Service (EKS)?

Amazon EKS Tags

Amazon EKS has a tagging function that serves a similar purpose, with tags instead of labels. These tags also consist of a key and an optional value parameter (the value is required in Kubernetes). AWS provides two ways to add and delete tags: from the resource's page in the console, or programmatically using the AWS Command Line Interface (CLI), the application programming interface (API), or eksctl.
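
As a sketch (the cluster name, account ID in the ARN, and tag keys below are placeholders), tagging an EKS cluster from the command line might look like this:

# tag the cluster at creation time with eksctl
eksctl create cluster --name demo-cluster --tags "team=payments,environment=dev"

# add or update tags on an existing cluster with the AWS CLI
aws eks tag-resource \
  --resource-arn arn:aws:eks:us-west-2:111122223333:cluster/demo-cluster \
  --tags team=payments,environment=dev

# list and remove tags
aws eks list-tags-for-resource \
  --resource-arn arn:aws:eks:us-west-2:111122223333:cluster/demo-cluster
aws eks untag-resource \
  --resource-arn arn:aws:eks:us-west-2:111122223333:cluster/demo-cluster \
  --tag-keys environment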

Tags help classify your resources and track the costs associated with each one. In AWS, with the help of AWS Identity and Access Management (IAM), you can also control who can assign and remove tags on Amazon EKS resources.

Best Practices for Kubernetes Labels

There is no right or wrong way to use Kubernetes labels, but some best practices make containers easier to organize. While working with various managed Kubernetes environments, keep in mind that every provider may have its own rules regarding label and tag syntax. For example, when defining tags in EKS, avoid the aws: or AWS: prefixes, as they are reserved for AWS use.

The second rule of thumb is to always give resources a meaningful label. This creates some overhead at creation time, but it helps tremendously in the long run: it is easier to document or remember a name than a random resource ID. There is nothing functionally wrong with giving resources irrelevant names, but following a naming convention significantly reduces the time it takes for you or other team members to identify a resource. If you work in a large enterprise managing thousands of containers, pods, and nodes across your infrastructure, implement company-wide naming conventions.
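
One common starting point is the set of labels Kubernetes itself recommends under the app.kubernetes.io/ prefix. Here is a sketch of how a team-wide convention might look on a pod's metadata (the values are illustrative):

metadata:
  labels:
    app.kubernetes.io/name: webapp
    app.kubernetes.io/version: "1.4.2"
    app.kubernetes.io/component: frontend
    app.kubernetes.io/part-of: checkout
    app.kubernetes.io/managed-by: helm
    # organization-specific labels for cost allocation and ownership
    team: payments
    environment: dev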

Choosing a Cloud Cost Optimization Management Solution for Kubernetes 

There are many solutions on the market that focus on cost allocation either for AWS tags on EC2 instances or for Kubernetes, but very few provide insight into both simultaneously. Yotascale takes shared resources and allocates them not just by AWS tags but also by Kubernetes labels, splitting up the cost of a cluster based on utilization metrics from each pod, which can then be allocated back to the specific team deploying it. Yotascale also shows the EC2 resources running your Kubernetes clusters with a breakdown by pod, giving each engineering team the full picture of the cost of its services and applications.

Next Steps

We now understand how Kubernetes labels work and how that functionality translates into AWS EKS. You can also use labels to track Kubernetes resource use, pinpointing costs across Kubernetes infrastructure. This bird’s-eye view of systematically labeled resources helps you understand how teams and projects use Kubernetes resources, helping you analyze project cost and even identify orphaned resources.

Instead of using your labels to track projects and costs manually, you can use advanced cloud cost management services like Yotascale. Yotascale’s tools help you leverage tags and labels for more accurate visibility into your cloud resource usage. Try Yotascale’s free trial or request a demo today.