What are Kubernetes pods?

Kubernetes organizes containers into groups called pods — an abstraction that drives Kubernetes’ scheduling flexibility.

A pod is the smallest deployable unit that Kubernetes schedules. When you want to deploy an instance of a microservice — say, a web server — Kubernetes places your container in a pod that serves as the context for that container, with its own network identity, storage volumes, and configuration data.

Each pod is made up of one or more containers. Single-container pods are most common, but in advanced use cases, pods may be made up of tightly coupled containers that share resources. But what exactly is the difference between a container and a pod, and why does Kubernetes use the pod abstraction in the first place?

Why Kubernetes uses pods

Containers bundle the code and dependencies for a microservice. But in a complex cloud ecosystem running at scale, a crucial problem arises: how can containers communicate with one another effectively?

Pods allow Kubernetes to simplify communication between containers, whether those containers share a pod or not. Because each pod has its own unique IP address, a given pod can coordinate with another pod in your deployment easily, regardless of the nodes on which they are located.
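You can observe these addresses directly. Assuming a running cluster with at least one pod, the -o wide flag adds each pod's IP and node to the listing:

```shell
kubectl get pods -o wide
```

The IP column shows the cluster-internal address other pods can use to reach it, and the NODE column shows where each pod was scheduled.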

[Diagram: single-container pod]

Moreover, if you are deploying containers that work together closely — a web server and a sidecar, for example — then they may be placed in the shared context of a pod, where they can simply communicate using localhost. You may also use shared storage volumes and configurations to coordinate those processes.
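As a sketch of that pattern, here is a hypothetical manifest (all names are illustrative) in which a web server and a logging sidecar share a pod and an emptyDir volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs          # scratch space shared by both containers
      emptyDir: {}
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-sidecar
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
```

Because the two containers also share the pod's network namespace, the sidecar could likewise reach the web server at localhost:80 — no service discovery required.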

[Diagram: multi-container pod]

Whether you are using single-container or multi-container pods, the Kubernetes pod serves as a kind of wrapper that frames your microservice as a process to be scheduled according to your needs. In other words, Kubernetes handles pods rather than handling the containers themselves. And this is how Kubernetes is able to scale flexibly, make updates without downtime, and otherwise orchestrate containerized applications for optimal efficiency.

How do Kubernetes pods work?

Kubernetes pods may be created manually — but more often, they’re created automatically as part of the cluster’s scheduling. This is because pods tend to be ephemeral, meaning they are intended to be used for only a relatively short time.

After a Kubernetes pod is created, it is assigned to an appropriate compute resource (whether a physical or virtual machine) called a node, depending on its requirements. For many deployments, you may wish to run numerous instances of a microservice — in this case, you would create replicas of your pods, which might run across a cluster (or managed group) of nodes.

This structure simplifies many tasks for complex multi-container deployments, including:

  • Horizontal scaling
  • Failure tolerance
  • Zero-downtime updates

If a deployment requires more resources for a microservice, additional identical pods (called replicas) may be created and assigned. And if resource requirements drop again, Kubernetes can terminate surplus replicas seamlessly.
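In practice, you declare a desired replica count rather than creating each pod by hand. Here is a minimal sketch of such a manifest, reusing the nginx image (the names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3                  # Kubernetes keeps three identical pods running
  selector:
    matchLabels:
      role: example-role
  template:
    metadata:
      labels:
        role: example-role
    spec:
      containers:
        - name: web
          image: nginx
```

If one of the three pods fails, Kubernetes notices the shortfall and creates a replacement to restore the declared count.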

Managing pods with kubectl

Earlier, we said that pods are typically managed automatically, and this is true. But you can also manage pods manually using the Kubernetes command-line tool, kubectl. For hands-on practice with pods, spin up a simple Kubernetes development environment and try the following commands.

In the command line, you can use kubectl to create a pod from the specifications in a YAML file. To do so, you’ll use the -f flag on the create command, passing the path to your YAML file.

For this example, let’s use the following YAML file. Save it as example-pod.yml in your working directory:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    role: example-role
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP

Now, in the command line, use the following command to create your pod:

kubectl create -f example-pod.yml

Now your pod should be created. To confirm, you can check for information on the pods in your current namespace:

kubectl get pods

This should return a readout with information on the readiness, status, restarts, and age for all running pods:

NAME          READY   STATUS    RESTARTS   AGE
example-pod   1/1     Running   0          7m49s

Now let’s delete our example pod. We can do this in two principal ways: by pointing to the YAML manifest that created the pod…

kubectl delete -f example-pod.yml

Or by referring to the pod by name, so long as it is in your current namespace:

kubectl delete example-pod

It’s important to note that deleted pods continue running for a grace period — 30 seconds by default — while they shut down. You can set a different grace period with the --grace-period flag. For pods in other namespaces, add the --namespace flag. And if you need to force delete a Kubernetes pod, you can use --force.

Let’s imagine, then, that example-pod is running in a namespace called dev and you want to force-delete it immediately. In this situation, you could use:

kubectl delete example-pod --grace-period=0 --force --namespace dev

Kubernetes pods for continuous deployment

While it is possible to manage pods manually, Kubernetes is built for automation. In Kubernetes, replica pods are typically created and managed by a controller — such as a Deployment — which oversees the entire lifecycle of the replica set.
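For example, assuming a Deployment named example-deployment already exists in your cluster (the name is illustrative), you can change the replica count with a single command and let the controller reconcile the difference:

```shell
kubectl scale deployment example-deployment --replicas=5
```

The controller then creates or terminates pods until the running count matches the declared count.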

Of course, these capabilities can make a system more fault-tolerant as well as horizontally scalable. If a process — or a node — crashes, the system can create replicas and assign them to new nodes as needed.

And if a public-facing process needs to be updated, traffic may be smoothly directed to new pods with updated versions of the microservice, all without downtime.
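A rolling update is the usual mechanism for this. As a sketch, assuming a Deployment-managed workload whose container is named web (both names are illustrative), you point the controller at a new image and Kubernetes replaces pods incrementally:

```shell
# Roll out a new image version; old pods are replaced a few at a time
kubectl set image deployment/example-deployment web=nginx:1.27

# Watch the rollout progress until it completes
kubectl rollout status deployment/example-deployment
```

Throughout the rollout, traffic is served by whichever pods are currently ready, so the service stays available.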

In an environment of rapid and continuous deployment, Kubernetes pods allow for balanced, efficient, and flexible scheduling. With controllers apportioning pods to nodes for well-distributed resource sharing, Kubernetes supports increasingly complex and dynamic applications.