What are Kubernetes pods?
Kubernetes organizes containers into groups called pods — an abstraction that drives Kubernetes’ scheduling flexibility.

A pod is the fundamental scheduling unit in a Kubernetes implementation. When you want to deploy an instance of a microservice — say, a web server — Kubernetes places your container in a pod that serves as the context for that container, with its own namespace, storage volumes, and configuration data.
Each pod is made up of one or more containers. Single-container pods are most common, but in advanced use cases, pods may be made up of closely cooperative containers that share resources. But what exactly is the difference between a container and a pod, and why does Kubernetes use the pod abstraction in the first place?
Why Kubernetes uses pods
Containers bundle the code and dependencies for a microservice. But in a complex cloud ecosystem running at scale, a crucial problem arises: how can containers communicate with one another effectively?
Pods allow Kubernetes to simplify communication between containers, whether those containers share a pod or not. Because each pod has its own unique IP address, a given pod can coordinate with another pod in your deployment easily, regardless of the nodes on which they are located.
Moreover, if you are deploying containers that work together closely — a web server and a sidecar, for example — then they may be placed in the shared context of a pod, where they can simply communicate using localhost. You may also use shared storage volumes and configurations to coordinate those processes.
Whether you are using single-container or multi-container pods, the Kubernetes pod serves as a kind of wrapper that frames your microservice as a process to be scheduled according to your needs. In other words, Kubernetes handles pods rather than handling the containers themselves. And this is how Kubernetes is able to scale flexibly, make updates without downtime, and otherwise orchestrate containerized applications for optimal efficiency.
How do Kubernetes pods work?
Kubernetes pods may be created manually — but more often, they’re created automatically, in the course of a Kubernetes implementation’s scheduling. This is because pods tend to be ephemeral, meaning they are intended to be used for only a relatively short time.
After a Kubernetes pod is created, it is assigned to an appropriate compute resource (whether a physical or virtual machine) called a node, depending on its requirements. For many deployments, you may wish to run numerous instances of a microservice — in this case, you would create replicas of your pods, which might run across a cluster (or managed group) of nodes.
This structure simplifies many tasks for complex multi-container deployments, including:
Horizontal scaling
Failure tolerance
Zero-downtime updates
If a deployment requires more resources for a microservice, additional identical pods (called replicas) may be created and assigned. And if resource requirements drop again, Kubernetes can scale down and terminate surplus replicas seamlessly.
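In practice, replicas are usually declared through a higher-level controller such as a Deployment rather than created by hand. The following is a minimal sketch; the name web-deployment, the app: web label, and the nginx image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment   # illustrative name
spec:
  replicas: 3            # Kubernetes maintains three identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
```

Changing replicas (or running kubectl scale) is all it takes to grow or shrink the set; the controller creates or terminates pods to match.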
Kubernetes pod lifecycle
Understanding the lifecycle of a Kubernetes pod is essential to mastering Kubernetes operations. From the moment you create a pod using kubectl to the time it’s terminated, each pod moves through a predictable series of phases: Pending, Running, Succeeded, or Failed. When you first create a pod, the Kubernetes scheduler assigns it to a node based on scheduling constraints and resource availability. The pod enters the “Pending” phase while its container images are pulled and resources are allocated. Once everything is ready, the pod transitions to “Running.”
Kubernetes continuously monitors pods. If a container in a pod crashes or stops responding, it may be restarted by the kubelet, replaced by a controller, or left terminated, depending on the pod’s configuration and how it was created. This built-in resiliency — managed by controllers — ensures your k8s pods behave predictably, even in complex deployments.
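That restart behavior is governed by the pod’s restartPolicy field. As a sketch (Always is the default; OnFailure and Never are the alternatives; the name and command here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo         # illustrative name
spec:
  restartPolicy: OnFailure   # restart containers only when they exit non-zero
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "exit 1"]   # fails, so the kubelet restarts it
```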
Whether you’re working with ephemeral or long-running pods, it’s helpful to use commands like kubectl get pods or kubectl describe pods to monitor their status.
How do pods communicate with each other?
Each pod in Kubernetes is assigned a unique IP address, and containers within the same pod can communicate over localhost, sharing the same network namespace. This setup is perfect for multi-container pods that need to tightly coordinate tasks, such as a web server container and a logging sidecar.
For communication across different pods — even across nodes — Kubernetes services act as stable endpoints, abstracting the underlying IPs of pods in Kubernetes. Whether you’re deploying a frontend that needs to talk to a backend service, or scaling microservices across a cluster, this IP-based networking and service abstraction simplifies orchestration.
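As a sketch of that service abstraction, the manifest below defines a Service that gives any pods carrying a given label one stable name and IP; the name web-service and the app: web label are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service      # stable DNS name within the cluster
spec:
  selector:
    app: web             # routes traffic to any pod carrying this label
  ports:
  - port: 80             # port clients connect to on the service
    targetPort: 80       # port the pod's container listens on
```

Clients address web-service by name; Kubernetes load-balances across whichever matching pods currently exist, so pod IPs can change freely.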
And when troubleshooting, use commands like kubectl get pods or kubectl describe pod [pod-name] to inspect how your Kubernetes pods are interacting and behaving.
Single-container pods vs. multi-container pods
Most often, developers start with single-container pods, where one container handles a distinct microservice. This keeps deployments clean, predictable, and aligned with best practices for containerization.
However, Kubernetes also supports multi-container pods — helpful in tightly coupled scenarios where containers must share storage or memory. For example, a helper container might pull configuration files or handle metrics export, running alongside your main app in the same pod. This pattern is often referred to as the sidecar pattern, and it’s foundational for observability, logging, and security in cloud-native environments.
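A minimal sketch of the sidecar pattern, assuming an nginx main container and a busybox helper that tails the shared log volume (the names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}           # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-tailer       # sidecar: streams the web server's logs
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
```

Both containers are scheduled together, share the volume and network namespace, and live and die as one unit.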
So when you’re deciding between containers vs pods, the key difference is: containers are the app runtimes, while pods are the deployable units that contain one or more containers and define their shared context. Kubernetes doesn't schedule containers — it schedules pods.
Managing pods with kubectl
Earlier, we said that pods are typically managed automatically, and this is true. But we do have the ability to manage pods manually using the Kubernetes command-line tool kubectl. For hands-on practice with pods, spin up a simple Kubernetes development environment and try the following commands:
In the command line, you can use kubectl to create a pod from the specifications in a YAML file. To do so, you’ll use the -f flag on the create command, including the filepath for your YAML file.
For this example, let’s use the following YAML file. Using the filename example-pod.yml, place this in your working directory:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    role: example-role
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
Now, in the command line, use the following command to create your pod:
kubectl create -f example-pod.yml
Now your pod should be created. To confirm, you can check for information on the pods in your current namespace:
kubectl get pods
This should return a readout with information on the readiness, status, restarts, and age for all running pods:
NAME          READY   STATUS    RESTARTS   AGE
example-pod   1/1     Running   0          7m49s
Now let’s delete our example pod. We can do this in two principal ways: by pointing to the YAML manifest that created the pod…
kubectl delete -f example-pod.yml
Or you can simply refer to the pod by name, so long as it is in your default namespace:
kubectl delete example-pod
It’s important to note that, by default, deleted pods continue running for a grace period, usually 30 seconds. You can specify a different grace period with the --grace-period flag. For pods in other namespaces, use the --namespace flag. And if you need to force-delete a Kubernetes pod, you can add --force.
Let’s imagine, then, that example-pod is running in a namespace called dev and you want to force-delete it immediately. In this situation, you could use:
kubectl delete example-pod --grace-period=0 --force --namespace dev
Keeping Kubernetes pods healthy
Keeping Kubernetes pods healthy is critical for maintaining uptime and ensuring smooth operation of your containerized workloads. Kubernetes uses readiness and liveness probes to monitor the health of pods. A liveness probe checks whether a container inside a pod is still running correctly, while a readiness probe checks whether it’s ready to accept traffic. If a container fails a liveness probe, the kubelet restarts that container automatically, with no human intervention required.
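As a sketch, probes are declared per container in the pod spec; the paths and timings below are illustrative assumptions, not recommended defaults:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /            # restart the container if this stops responding
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /            # withhold service traffic until this succeeds
        port: 80
      periodSeconds: 5
```

Tuning initialDelaySeconds matters in practice: probing too early can kill a container that is still starting up.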
For manual inspection, you can use kubectl get pods or kubectl describe pod [pod-name] to see detailed health information. If you’re trying to figure out what’s going wrong, kubectl logs [pod-name] is your best friend.
And if a pod enters a crash loop or becomes unresponsive, sometimes the fastest path to resolution is to delete it with kubectl delete and let its controller spin up a replacement. Kubernetes makes sure that healthy replicas replace failed pods seamlessly, helping you stay resilient even in dynamic, large-scale environments.
Whether you're operating a development cluster or managing pods in Kubernetes at scale, maintaining pod health ensures your microservices keep running smoothly — and that your team spends more time building, less time firefighting.
Kubernetes pods for continuous deployment
While it is possible to manage pods manually, Kubernetes is built for automation. In Kubernetes, replica pods are typically created and managed by a component called a controller, which oversees the entire lifecycle of the replica set.
Of course, these capabilities can make a system more fault-tolerant as well as horizontally scalable. If a process — or a node — crashes, the system can create replicas and assign them to new nodes as needed.
And if a public-facing process needs to be updated, traffic may be smoothly directed to new pods with updated versions of the microservice, all without downtime.
In an environment of rapid and continuous deployment, Kubernetes pods allow for balanced, efficient, and flexible scheduling. With controllers apportioning pods to nodes for well-distributed resource sharing, Kubernetes supports increasingly complex and dynamic applications.
Learn more about enterprise Kubernetes.
Learn more about production Kubernetes.
Learn more about the role of secure container runtime.
Learn more about the importance of a secure registry.
Or download Mirantis Kubernetes Engine or k0s – zero-friction Kubernetes.