· Satrajit Sengupta · Kubernetes · 6 min read
Kubernetes pods — the smallest unit of deployment
A pod is not a container. Understanding the distinction — and why Kubernetes uses pods as its fundamental unit — is the first conceptual step to working effectively with Kubernetes.
If you’re coming from Docker, the natural assumption is that Kubernetes runs containers directly. It doesn’t. The fundamental unit of deployment in Kubernetes is the pod — and understanding what a pod is, why it exists, and how it differs from a container is the first conceptual step to using Kubernetes effectively.
What is a pod?
A pod is a group of one or more containers that share storage and network resources, and that are always scheduled and run together on the same node.
Think of a pod as a “logical host” for its containers — they share a network namespace (same IP address, same port space), can communicate via localhost, and can share storage volumes. From outside the pod, all this looks like a single unit.
In terms of computing resources, every pod gets:
- CPU — requests (the minimum reserved for scheduling) and limits (the maximum) for each container in the pod
- Memory — requests and limits for each container
- Storage — shared volumes or container-specific volumes
- Network — a single IP address assigned to the pod; all containers share it
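These resource settings map onto the pod spec roughly like this (a minimal sketch; the name, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:              # capacity the scheduler reserves
        cpu: 100m
        memory: 128Mi
      limits:                # hard ceiling enforced at runtime
        cpu: 250m
        memory: 256Mi
```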
Why pods and not containers directly?
The pod abstraction solves a specific problem: some applications are composed of tightly coupled processes that need to share resources. A web server and a log shipper that needs to read the web server’s log files are a natural example. They’re separate processes, but they belong together operationally.
Without pods, Kubernetes would need complex machinery to co-locate containers and give them shared access to storage and network. The pod model makes this first-class: you declare which containers belong together, and Kubernetes ensures they always run on the same node with shared resources.
Pods also enable Kubernetes’ replication story. By encapsulating containers into pods, Kubernetes can replicate, scale, and schedule the entire unit. A ReplicaSet says “run 3 copies of this pod” — not “run 3 copies of this container plus that other container in the same place”.
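That "run 3 copies of this pod" declaration looks like this in practice (a sketch; names and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-rs
spec:
  replicas: 3                # "run 3 copies of this pod"
  selector:
    matchLabels:
      app: webapp
  template:                  # the pod template being replicated
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx
```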
Pod architecture internals
Every pod contains a special container you never see in your manifests: the Pause container (also called the “sandbox” container). The Pause container holds the network namespace for the pod. All other containers in the pod join this namespace, which is why they share an IP and can communicate via localhost.
No container in the pod communicates directly with the host network. All communication goes through the network namespace managed by the Pause container, which connects to the host through the container runtime.
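You can see the shared network namespace in action with a two-container pod where a sidecar reaches the main container over localhost (a sketch; names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
  - name: web
    image: nginx             # listens on port 80 inside the pod
  - name: probe
    image: busybox
    # Reaches the nginx container via the shared network namespace
    command: ["/bin/sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]
```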
Types of pods
Single-container pod
The most common type. One pod, one container. The pod is just a wrapper that gives Kubernetes a schedulable, monitorable unit.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx:latest
```

Multi-container pod
Multiple related containers that need to share resources. The canonical examples:
Sidecar pattern — a log aggregator container reads logs from the application container via a shared volume:
```
[app container] → writes logs → [shared volume] → [log shipper container]
```

Ambassador pattern — a proxy container handles external communication on behalf of the main container (useful for adding TLS termination or retries without changing application code).
Adapter pattern — normalises the output of the main container into a format expected by external monitoring or logging infrastructure.
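A sidecar pod along the lines of the log-shipper example might look like this (a sketch; container names, images, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  containers:
  - name: app
    image: busybox
    # Stand-in for the application: appends to a log file on the shared volume
    command: ["/bin/sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper
    # Stand-in for a real shipper: tails the same file from the shared volume
    image: busybox
    command: ["/bin/sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}             # shared scratch volume, lives as long as the pod
```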
Multi-container pod with init containers
Init containers run to completion before any regular containers start. They’re useful for setup tasks that are prerequisites for the main application:
- Waiting for a database to be ready before starting the application
- Setting up configuration from a remote source using sed, awk, or similar tools
- Populating a shared volume with data the main container needs
Init containers must exit with code 0 for the pod to proceed. If an init container fails, Kubernetes restarts it according to the pod’s restart policy. Regular containers don’t start until all init containers have succeeded.
Multiple init containers are allowed — they run sequentially, one at a time.
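A sketch of the database-wait case (the "db" Service name and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    # Blocks until the (assumed) "db" Service resolves; must exit 0 before the app starts
    command: ["/bin/sh", "-c", "until nslookup db; do echo waiting for db; sleep 2; done"]
  containers:
  - name: app
    image: nginx
```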
Pods with ephemeral containers
Ephemeral containers are added to a running pod temporarily for debugging. They’re useful when:
- kubectl exec isn’t enough (the container has crashed, or lacks debugging tools)
- You need to inspect the network or filesystem state of a running pod
Ephemeral containers are not restarted when they exit and aren’t included in the pod spec permanently.
```shell
kubectl debug -it my-pod --image=busybox --target=my-container
```

kubectl commands for pod management
Run a pod
```shell
kubectl run nginx --image=nginx
```

Run a pod with labels
```shell
kubectl run nginx --image=nginx --labels "app=nginx,tier=web"
```

Run a pod with custom command
```shell
# Use the default ENTRYPOINT but override CMD arguments
kubectl run busybox-app --image=busybox -- /bin/sh -c "sleep 1200"

# Override both ENTRYPOINT and CMD
kubectl run busybox-app --image=busybox --command -- /bin/sh -c "sleep 1200"
```

Run a pod with resource requests and limits
```shell
kubectl run nginx --image=nginx \
  --requests=cpu=200m,memory=256Mi \
  --limits=cpu=400m,memory=512Mi
```

Note that newer kubectl versions have removed the --requests and --limits flags from kubectl run; there, set resources under the container’s resources field in the pod spec instead.

Resource requests tell the scheduler how much capacity to reserve. Limits are the hard ceiling — the container is throttled (CPU) or OOM-killed (memory) if it exceeds them.
Check pod logs
```shell
kubectl logs nginx                 # single-container pod
kubectl logs mc-pod -c container1  # specific container in multi-container pod
kubectl logs -f nginx              # follow (live tail)
kubectl logs --previous nginx      # logs from the previous container instance (useful after crashes)
```

Execute a command in a pod
```shell
kubectl exec nginx -- ls /usr/share/nginx/html
kubectl exec -it nginx -- bash                # interactive shell
kubectl exec mc-pod -c container1 -- sleep 5  # specific container
```

Describe a pod
```shell
kubectl describe pod nginx
```

describe is your go-to for debugging. It shows the pod’s events, conditions, container statuses, resource requests, and the node it’s running on. When a pod isn’t starting, describe usually tells you why.
Check pod resource usage
```shell
kubectl top pod nginx
kubectl top pod nginx -c nginx-container  # specific container
```

Requires the Metrics Server to be deployed in the cluster.
Delete a pod
```shell
kubectl delete pod nginx
```

If the pod was created by a Deployment or ReplicaSet, Kubernetes will immediately create a replacement. To permanently remove a pod, delete the controlling resource (the Deployment), not the pod directly.
One thing to remember
Pods are ephemeral. They’re not designed to be long-lived. A pod can be killed and replaced at any time — by a node failure, a rolling update, or a manual scale-down. Any data written to the pod’s writable container layer is gone when the pod is gone.
For stateful workloads, use PersistentVolumeClaims to attach durable storage that survives pod replacement. For stateless workloads, design containers that can start fresh and pull their configuration from environment variables or ConfigMaps.
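A minimal sketch of attaching durable storage via a PersistentVolumeClaim (names, access mode, and size are illustrative; assumes a default StorageClass exists):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: stateful-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data       # writes here survive pod replacement
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
```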
The full reference for kubectl commands is at kubernetes.io/docs/reference.
Part of the Kubernetes series on The Digital Drift. Previous: Kubernetes — a beginner’s guide