Everything You Should Know About How Kubernetes Works

Understanding Kubernetes: Core Concepts and Practical Use Cases


Kubernetes is open-source orchestration software for deploying, managing, and scaling containers. Modern applications are increasingly built from containers, which are microservices packaged with their dependencies and configurations. Kubernetes (pronounced "koo-ber-net-ees"), sometimes abbreviated "k8s" or "k-eights", takes its name from the Greek word for a ship's helmsman or pilot, and it lets you build, deliver, and scale containerized apps faster.

What is container orchestration?

Container orchestration is the automation of much of the operational effort required to run containerized workloads and services. This includes a wide range of things software teams need to manage a container’s lifecycle, including provisioning, deployment, scaling (up and down), networking, load balancing, and more.

While containers are a very effective way of running your applications, they come with one major challenge: container orchestration. Containers need continuous monitoring and management in a production environment to ensure continuity of service, including checks that the system is healthy and not experiencing downtime. Running and scaling containers in production by hand quickly becomes tricky.

Kubernetes was introduced as a solution to this particular problem. It helps you manage a distributed container system effectively, running and scaling the containers while handling downtime. It provides load balancing, storage orchestration, configuration management, automated rollouts, and more.

In this article, we will discuss the core functionality of Kubernetes and how effectively it can be used as a service for orchestrating and managing many containers.

Why do you need to understand Kubernetes?

Kubernetes, or K8s, provides several key features that allow us to run immutable infrastructure. Containers can be killed, replaced, and self-heal automatically, and each replacement container gets access to the supporting volumes, secrets, configurations, and so on that it needs to function.

The following features of K8s help manage containers and make your containerized application scale efficiently:

  • Service discovery and load balancing - Each Pod gets its own IP address, and a set of Pods can sit behind a single DNS name for load balancing.

  • Storage orchestration - Automatically mount local or public cloud or network storage.

  • Secret and configuration management - Create and update secrets and configs without rebuilding your image.

  • Self-healing - The platform heals many problems like restarting failed containers, replacing and rescheduling containers as nodes die, killing containers that don’t respond to your user-defined health check, and waiting to advertise containers to clients until they’re ready.

  • Batch execution - Manage your batch and Continuous Integration workloads and replace failed containers.

  • Automatic bin packing - Automatically schedules containers based on resource requirements and other constraints.

  • Automated rollouts and rollbacks - Roll out changes while monitoring the health of your application, ensuring all instances don't go down simultaneously. If something goes wrong, K8s automatically rolls back the change.

  • Horizontal scaling - Scale your application by launching more instances of your application as needed from the command line or UI.
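As a concrete illustration of self-healing, a Pod manifest can declare a user-defined health check; if the check fails, the kubelet restarts the container automatically. This is a minimal sketch (the Pod name, image tag, and probe values are illustrative):

```yaml
# Hypothetical Pod whose container is restarted automatically
# when its HTTP health check starts failing.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25        # illustrative image tag
      ports:
        - containerPort: 80
      livenessProbe:           # user-defined health check
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```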

Kubernetes Core Components

Kubernetes gives you the platform to schedule and run containers on clusters of physical or virtual machines. Kubernetes architecture divides a cluster into components that work together to maintain the cluster's defined state.

Kubernetes Architecture


A Kubernetes cluster is a set of node machines for running containerized applications. You can visualize a Kubernetes cluster as two parts:

  1. The control plane

  2. The compute machines or nodes.

Each node is its own Linux environment and could be either a physical or virtual machine. Each node runs pods, which are made up of containers.

Control Plane

The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied). Below are the components of the Control Plane.

API Server

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.

The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.


etcd

etcd is the consistent and highly available key-value store used as Kubernetes' backing store for all cluster data. If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for that data.

You can find in-depth information about etcd in the official documentation.

Kube Controller Manager

This Control plane component runs the controller processes. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. Some types of these controllers are:

  • Node controller - Responsible for noticing and responding when nodes go down.

  • Job controller - Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.

  • Endpoints controller - Populates the Endpoints object (that is, joins Services & Pods).

  • Service Account & Token controllers - Create default accounts and API access tokens for new namespaces.

Cloud Controller Manager

This control plane component embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.

The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager. The following controllers can have cloud provider dependencies:

  • Node controller - Checks the cloud provider to determine if a node has been deleted in the cloud after it stops responding.

  • Route controller - Sets up routes in the underlying cloud infrastructure.

  • Service controller - Creates, updates, and deletes cloud provider load balancers.


Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.


Kubelet

The kubelet is an agent that runs on each node in the cluster. It makes sure that the containers described in Pod specifications are running and healthy.


Kube Proxy

kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

Deploying an Application

A Kubernetes deployment is a resource object in Kubernetes that provides declarative updates to applications. A deployment allows you to describe an application’s life cycle, such as which images to use for the app, the number of pods there should be, and the way in which they should be updated.

The process of manually updating containerized applications can be time-consuming and tedious. A Kubernetes deployment makes this process automated and repeatable. Deployments are entirely managed by the Kubernetes backend, and the whole update process is performed on the server-side without client interaction.

The Kubernetes deployment object lets you:

  • Deploy a replica set or pod

  • Update pods and replica sets

  • Roll back to previous deployment versions

  • Scale a deployment

  • Pause or continue a deployment
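To make this concrete, here is a minimal sketch of a Deployment manifest (the name, label, image, and replica count are all illustrative), which you could apply with `kubectl apply -f deployment.yaml`:

```yaml
# Hypothetical Deployment maintaining three replicas of a container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                  # number of pods to maintain
  selector:
    matchLabels:
      app: web
  template:                    # pod template used for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Changing `replicas` or `image` and re-applying the manifest triggers the declarative, server-side update process described above.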

Access Control

Controlling Access to Kubernetes API

Users access the Kubernetes API using kubectl, client libraries, or by making REST requests. Both human users and Kubernetes service accounts can be authorized for API access. When a request reaches the API, it goes through the following stages:


Authentication

The input to the authentication step is the entire HTTP request; however, it typically examines only the headers and/or client certificates. Authentication modules include client certificates, passwords, plain tokens, bootstrap tokens, and JSON Web Tokens. Multiple authentication modules can be specified, in which case each one is tried in sequence until one of them succeeds.

If the request cannot be authenticated, it is rejected with HTTP status code 401. Otherwise, the user is authenticated as a specific username for further stages.


Authorization

A request must include the username of the requester, the requested action, and the object affected by the action. The request is authorized if an existing policy declares that the user has permission to complete the requested action.

Example:

  • If Bob makes a request to write (create or update) to the objects in the development namespace, his authorization is denied if his role is not allowed in the update/create policy.

  • If Bob makes a request to read (get) objects in a different namespace such as production, then his authorization is denied.

Kubernetes supports multiple authorization modules, such as ABAC mode, RBAC (Role-Based Access Control) Mode, and Webhook mode. If all of the modules deny the request, then the request is denied (HTTP status code 403).
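With RBAC mode, the policy from the example above could be expressed as a Role and a RoleBinding. This is an illustrative sketch only; the user, namespace, and object names are hypothetical:

```yaml
# Hypothetical RBAC policy: allow user "bob" to read, create, and
# update Pods only in the "development" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: pod-editor
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: development
  name: bob-pod-editor
subjects:
  - kind: User
    name: bob
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-editor
  apiGroup: rbac.authorization.k8s.io
```

A request from bob to update Pods in the production namespace matches no binding there, so it is denied with HTTP status code 403.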

Admission control

Admission Control modules are software modules that can modify or reject requests. In addition to the attributes available to Authorization modules, Admission Control modules can access the contents of the object that is being created or modified. Unlike Authentication and Authorization modules, if any admission controller module rejects, then the request is immediately rejected.

Secret Management

Kubernetes Secrets are objects that store sensitive data, such as passwords, OAuth tokens, and SSH keys, in your clusters. Using Secrets gives you more flexibility in a Pod lifecycle definition and control over how sensitive data is used, and it reduces the risk of exposing the data to unauthorized users.

Kubernetes uses etcd, its consistent and highly available key-value store, to store Secrets. Below are a few characteristics of Kubernetes Secrets:

  • Secrets are namespaced objects.

  • Secrets can be mounted as data volumes or environment variables to be used by a container in a pod.

  • Secret data mounted into Pods is stored in tmpfs on nodes, not written to node disk

  • The API server stores Secrets as plain text in etcd unless encryption at rest is configured

  • There is a per-Secret size limit of 1MiB
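As an illustration, a minimal Secret manifest might look like the following (the names and values are hypothetical; the `data` fields are base64-encoded, e.g. `echo -n 'admin' | base64` yields `YWRtaW4=`):

```yaml
# Hypothetical Secret holding database credentials.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=          # base64 of "admin"
  password: cGFzc3dvcmQ=      # base64 of "password"
```

A container in a Pod can then consume this Secret as an environment variable or as a mounted data volume, as listed above.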


Kube Scheduler

kube-scheduler is the control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on. The scheduler is responsible for placing containers across the nodes in the cluster.

Factors taken into account for scheduling decisions include:

  • Individual and collective resource requirements, such as CPU and memory

  • Hardware/software/policy constraints

  • Data locality

  • Inter-workload interference
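For example, a Pod spec can state resource requirements and node constraints that the scheduler takes into account when choosing a node. The node label and resource values below are illustrative:

```yaml
# Hypothetical Pod with scheduling hints: resource requests and
# a node selector constraining placement to SSD-backed nodes.
apiVersion: v1
kind: Pod
metadata:
  name: data-worker
spec:
  nodeSelector:
    disktype: ssd            # hardware/policy constraint
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:            # minimum resources the scheduler must find
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```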


Minikube

Minikube is a tool that lets you run Kubernetes locally. It runs a single-node Kubernetes cluster on your personal computer (Windows, macOS, or Linux) so that you can try out Kubernetes or use it for daily development work.

You can follow the official Get Started! guide if you want to install this tool. Once minikube is installed, you are ready to run a virtualized single-node cluster on your local machine.

You can start your minikube cluster with:

minikube start

Interacting with Kubernetes clusters is mostly done via the kubectl CLI. You can install the kubectl CLI on your machine by using the official installation instructions.


Conclusion

Even though containers are a very effective way of running your applications, container orchestration is the major challenge associated with them. Kubernetes addresses exactly this challenge, simplifying orchestration with features such as service discovery and load balancing, horizontal scaling, ease of deployment, configuration and secrets management, scheduling, network rule management, and access control across environments, users, and roles.

Companies across the world are now putting a lot of effort into running their apps on the Kubernetes platform. One of the primary reasons is that the platform reduces time to market through more efficient application development, which in turn helps optimize IT costs. Another important reason is improved scalability: Kubernetes automatically scales applications and improves their performance.
