
What are Kubernetes pods?
Kubernetes organizes containers into groups called pods — an abstraction that drives Kubernetes’ scheduling flexibility.
A pod is the smallest deployable unit in Kubernetes and its fundamental unit of scheduling. When you want to deploy an instance of a microservice, such as a web server, Kubernetes places your container in a pod that serves as the context for that container, with its own network namespace, storage volumes, and configuration data.
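To make this concrete, here is a minimal sketch of how a single-container web server pod might be defined and created with the official Kubernetes Python client. The pod name, labels, and the nginx image are illustrative placeholders, and the same definition could equally be written as a YAML manifest and applied with kubectl.

```python
# Sketch: defining and creating a single-container pod with the official
# Kubernetes Python client (pip install kubernetes). Names and image are
# placeholders for illustration only.
from kubernetes import client, config

config.load_kube_config()  # reads your local kubeconfig (e.g. ~/.kube/config)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-server", labels={"app": "web"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="nginx",
                image="nginx:1.25",  # placeholder image
                ports=[client.V1ContainerPort(container_port=80)],
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```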
Each pod is made up of one or more containers. Single-container pods are the most common, but in more advanced use cases a pod may contain several tightly coupled containers that share resources. So what exactly is the difference between a container and a pod, and why does Kubernetes use the pod abstraction in the first place?
Why Kubernetes uses pods
Containers bundle the code and dependencies for a microservice. But in a complex cloud ecosystem running at scale, a crucial problem arises: how can containers communicate with one another effectively?
Pods allow Kubernetes to simplify communication between containers, whether those containers share a pod or not. Because each pod has its own IP address, any pod can communicate with any other pod in your deployment, regardless of which nodes they run on.
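As a rough illustration, the following sketch (again using the Kubernetes Python client, with the default namespace assumed) lists the pods in a namespace and prints the pod IP each one was assigned, which is the address other pods use to reach it.

```python
# Sketch: listing pods and the cluster IP each one was assigned.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:
    # status.pod_ip is the pod's own IP address, reachable from other pods
    # regardless of which node each pod is scheduled on.
    print(pod.metadata.name, pod.spec.node_name, pod.status.pod_ip)
```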
Moreover, if you are deploying containers that work together closely (a web server and a sidecar, for example), they may be placed in the shared context of a pod, where they can simply communicate over localhost. You may also use shared storage volumes and configuration data to coordinate those processes.
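The sketch below shows what such a multi-container pod might look like: a web server plus a log-shipping sidecar that share the pod's network namespace and an emptyDir volume. The container names, images, command, and mount path are illustrative assumptions rather than a prescribed configuration.

```python
# Sketch: a two-container pod (web server plus a log-shipping sidecar).
# Both containers share the pod's network namespace, so they can talk over
# localhost, and a shared emptyDir volume lets them exchange files.
# Names, images, command, and paths are illustrative placeholders.
from kubernetes import client

shared_volume = client.V1Volume(
    name="shared-logs",
    empty_dir=client.V1EmptyDirVolumeSource(),
)
mount = client.V1VolumeMount(name="shared-logs", mount_path="/var/log/app")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar"),
    spec=client.V1PodSpec(
        volumes=[shared_volume],
        containers=[
            client.V1Container(
                name="web", image="nginx:1.25", volume_mounts=[mount]
            ),
            client.V1Container(
                name="log-shipper",
                image="busybox:1.36",
                volume_mounts=[mount],
                command=["sh", "-c", "tail -F /var/log/app/access.log"],
            ),
        ],
    ),
)
# The pod could then be submitted with create_namespaced_pod, as in the
# earlier sketch.
```

Because both containers mount the same volume and share one network namespace, the sidecar can read files the web server writes and could also reach it at localhost without any service in between.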
Whether you are using single-container or multi-container pods, the Kubernetes pod serves as a wrapper that frames your microservice as a unit to be scheduled according to your needs. In other words, Kubernetes manages pods rather than managing containers directly. This is what allows Kubernetes to scale flexibly, roll out updates without downtime, and otherwise orchestrate containerized applications efficiently.