Introduction to Kubernetes and Container Management

Akshaya Balaji

What is Kubernetes? And why do we need it?

To understand Kubernetes, we need to first look at how the methods of deploying applications have changed over time.

Traditional Deployment

Since the advent of software applications, deploying them has always been a challenge. Initially, applications were deployed directly on computer systems (physical servers). As organizations grew, this method broke down because applications deployed on the same system clashed over resource allocation. The rising expense of adding hardware to meet growing resource demands also became a serious challenge. This brought up the idea of running each application in an isolated environment.

Virtualization

The requirement of isolation led to the concept of virtualization, where each application is deployed on a virtual machine (VM), essentially a complete OS running on a hypervisor. Virtualization allows us to run multiple virtual machines on a single computer system’s CPU(s). Running each application on a separate virtual machine ensured that there would be no clashes over resources and also enhanced the security of the data handled by each application. Virtualization also provided better scalability than the expensive method of adding more physical servers.

Containerization

Even virtualization had its set of issues. Every VM is a complete copy of an OS, which means large parts of it consume resources without contributing to the single application running on it. VMs are also bulky and cannot easily be migrated or replicated. This brought up the concept of containerization: unlike VMs, containers share the host OS kernel and package only the application, its dependencies, and the stripped-down pieces of the OS it needs to run. Since containers are lightweight, they can be easily created and destroyed.

Since most applications are now tested and deployed in containers, the next challenge is managing those containers. This process of managing and maintaining containers is called container orchestration. Orchestrating containers manually is possible when their number is small, but as the number of containers and the sensitivity of the applications running on them increase, we need dedicated tools. One such tool is Kubernetes.


Development of Kubernetes and its features

Kubernetes was originally developed by Google, drawing directly on their internal cluster manager Borg, and was open-sourced in 2014. It is now maintained by the Cloud Native Computing Foundation (CNCF). As defined by the official docs,

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.

Kubernetes does not manage containers directly. Rather, it encloses them in entities called Pods, the smallest deployable units in Kubernetes, and manages the containers indirectly by working on the Pods. Kubernetes has several features that make it very useful for container management:
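
To make the Pod abstraction concrete, here is a minimal sketch of a Pod manifest; the names and image are illustrative, not from any particular deployment:

```yaml
# pod.yaml -- a minimal Pod wrapping a single container
# (illustrative names/image; apply with: kubectl apply -f pod.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.19      # any OCI-compliant container image
      ports:
        - containerPort: 80
```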

(i) Automatic handling of application updates and health

Kubernetes can roll out changes to an application and ensure that the same changes are applied to all the containers on which the application is running. While making the changes, it also monitors the application’s health and provides options for reverting (rolling back) to the previous state of the application if needed.
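
As a sketch of how this looks in practice (the Deployment name and image are illustrative), a Deployment’s rolling-update strategy controls how changes propagate, and kubectl rollout can monitor or revert them:

```yaml
# deployment.yaml -- a Deployment with a rolling-update strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during the roll-out
      maxSurge: 1         # at most one extra Pod created during the roll-out
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.19
# Updating the image triggers a roll-out; a bad update can be reverted:
#   kubectl set image deployment/hello-deploy hello=nginx:1.20
#   kubectl rollout status deployment/hello-deploy
#   kubectl rollout undo deployment/hello-deploy
```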

(ii) Load-Balancing

When a service runs on multiple Pods, it is not feasible for the service provider to hand the client multiple IPs and let them pick one. Kubernetes avoids this by setting up load balancers that direct client requests, arriving at a single common IP (or DNS name), to any of the Pods that can handle them.
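
A minimal sketch of such a load balancer, assuming the Pods carry the app: hello label from the earlier examples:

```yaml
# service.yaml -- a single stable entry point in front of all matching Pods
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: LoadBalancer      # on cloud providers, provisions an external load balancer
  selector:
    app: hello            # traffic is spread across every Pod with this label
  ports:
    - port: 80            # the single IP/DNS clients use
      targetPort: 80      # the container port behind it
```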

(iii) Storage Orchestration

Kubernetes allows us to mount storage systems either locally or from external sources like public cloud providers (AWS, GCP, etc.) and network storage systems (NFS, iSCSI, Ceph, etc.).
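
A hedged sketch of how this looks: a PersistentVolumeClaim requests storage abstractly, and a Pod mounts the claim without knowing which backend fulfils it. The names and size are illustrative, and the claim assumes the cluster has a default StorageClass:

```yaml
# pvc.yaml -- request storage abstractly; the cluster binds it to a backing volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # the actual backend (EBS, NFS, Ceph, ...) is chosen by the cluster
---
# A Pod mounts the claim without caring which storage system is behind it.
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
    - name: app
      image: nginx:1.19
      volumeMounts:
        - mountPath: /data
          name: data-vol
  volumes:
    - name: data-vol
      persistentVolumeClaim:
        claimName: data-claim
```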

(iv) Secret and configuration management

Kubernetes allows us to deploy and update secrets and application configuration without rebuilding the container image and without exposing secrets in the stack configuration.
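
For illustration, a Secret can be created as its own object and injected into a container as an environment variable at runtime; the names and value below are placeholders:

```yaml
# secret.yaml -- credentials live outside the image and the Pod spec
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:               # stringData is stored base64-encoded under data
  DB_PASSWORD: s3cr3t     # placeholder value; create from a file in practice
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx:1.19
      env:
        - name: DB_PASSWORD          # injected at runtime, not baked into the image
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
```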

(v) Batch execution

In addition to services, Kubernetes can manage batch and CI workloads, and replace containers that fail.
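
A minimal sketch of a batch workload as a Kubernetes Job (image and command are illustrative); Kubernetes retries the Pod if the container fails:

```yaml
# job.yaml -- run a task to completion; failed Pods are retried automatically
apiVersion: batch/v1
kind: Job
metadata:
  name: report-job
spec:
  completions: 1
  backoffLimit: 4          # retry a failing container up to four times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: report
          image: busybox:1.32
          command: ["sh", "-c", "echo generating report && sleep 5"]
```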

(vi) Horizontal scaling

Kubernetes allows us to scale an application up and down with a simple command, through a UI, or automatically based on CPU usage.
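
As an illustrative sketch, scaling can be done imperatively with kubectl scale, or automatically with a HorizontalPodAutoscaler like the one below. It assumes the hello-deploy Deployment from the earlier example; on clusters older than v1.23, the API group is autoscaling/v2beta2:

```yaml
# hpa.yaml -- scale the Deployment between 2 and 10 replicas based on CPU
# Equivalent imperative commands:
#   kubectl scale deployment/hello-deploy --replicas=5
#   kubectl autoscale deployment hello-deploy --min=2 --max=10 --cpu-percent=70
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-deploy
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```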

(vii) Self-healing

Kubernetes can restart containers that fail, replace and reschedule containers when nodes die, kill containers that don’t respond to a user-defined health check, and avoid advertising them to clients until they are ready to serve requests.
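
A sketch of the health checks that drive this behavior: a liveness probe triggers restarts, and a readiness probe gates traffic. The endpoints below are illustrative and must exist in your application:

```yaml
# probes.yaml -- health checks drive self-healing and traffic gating
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
    - name: app
      image: nginx:1.19
      livenessProbe:             # failing this probe restarts the container
        httpGet:
          path: /healthz         # placeholder endpoint; your app must serve it
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
      readinessProbe:            # failing this probe removes the Pod from Service endpoints
        httpGet:
          path: /ready           # placeholder endpoint
          port: 80
        periodSeconds: 5
```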

Where has Kubernetes been used?

Nokia

Nokia’s core business is building telecom networks end to end; its main products are infrastructure components such as antennas, switching equipment, and routing equipment. Their main objective was to run a given product on different infrastructures, such as bare metal, virtual machines, and even cloud-based instances, without significantly changing the product. When Nokia began moving to cloud-native technologies, Kubernetes helped them manage and reliably scale that architecture.

BlackRock

BlackRock, one of the world’s largest asset managers, faced the challenge of providing more dynamic access to resources in its data science division. Their broader objective, however, was to upgrade the existing environments for complex tools and software so that they would be easier to build, destroy, and scale as required. Kubernetes was a viable solution because they could draw on their experience with Docker, a containerization tool. Michael Francis, a Managing Director in BlackRock’s Product Group, said,

“…We need to gain experience of running the system in production, we need to understand failure modes and how best to manage operational issues. What’s interesting is that just having this technology there is changing the way our developers are starting to think about their future development.”

Spotify

Spotify had adopted microservices and Docker early on and initially ran them on VMs managed by Helios, a container orchestration system developed in-house. But their growing reach made it more efficient to work with a tool backed by a larger community. They chose Kubernetes, which allowed them to migrate gradually while Helios kept running in parallel. Kubernetes benefited them greatly through autoscaling and improved CPU utilization two- to threefold. Site Reliability Engineer James Wen said,

“Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.”

Pinterest

Pinterest had hundreds of microservices, multiple layers of infrastructure, and diverse tools and platforms. In 2016, they launched a roadmap to minimize the time from idea to production and began moving services to Docker. To manage the containers efficiently, they chose Kubernetes as their container orchestration tool, which also brought added benefits such as on-demand scaling and new failover policies.

What’s the latest development in Kubernetes?

Kubernetes is one of the most popular tools used for container orchestration. While it was originally built around Docker, support for the Docker Engine as a container runtime (via the dockershim component) was deprecated from Kubernetes 1.20 onwards. Images built with Docker still run unchanged, since they are OCI-compliant; Kubernetes simply delegates to other CRI-compatible runtimes such as containerd and CRI-O.

Conclusion

Kubernetes is one of many tools available for container orchestration, but its open-source nature and consistent community support have made it one of the most vital tools for reducing the time and effort spent deploying and maintaining new applications in a robust, easily scalable manner.
