Satrajit Sengupta · Containers · 6 min read

What are containers? The concept that changed how we ship software

Containers are one of those ideas that sound abstract until you understand the problem they solve — then you can't imagine going back.

“It works on my machine” — four words that have ended careers, delayed releases, and started more arguments in engineering teams than any other phrase in software development.

Containers exist because of this problem. Not as a clever hack, but as a genuine architectural solution to one of the oldest headaches in software: the gap between where code is written and where it runs.

The environment problem

When you write software, it doesn’t run in isolation. It depends on a specific version of a runtime (Python 3.11, not 3.9), specific system libraries, environment variables, file paths, and dozens of other invisible assumptions baked into every line of code.

A concrete example: your development machine has Java 11. The production server was set up six months earlier and has Java 8. The code compiles and tests pass locally. In production, it silently breaks in ways that take hours to diagnose. This is not an edge case — it’s the default outcome when environments aren’t explicitly managed.

The traditional solution was documentation: a README telling the next person what to install. Then came virtual machines, which solved the problem by shipping an entire operating system. Effective, but heavy — VMs take minutes to start, consume gigabytes of disk space, and duplicate the OS for every application.

Containers are the third approach: package the application along with its dependencies, configuration files, libraries, and binaries — but share the host operating system’s kernel. The result is isolation without the weight.
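To make that packaging concrete, here is a minimal Dockerfile sketch for a hypothetical Python service (the base image, `requirements.txt`, and `app.py` are illustrative assumptions, not from a real project):

```dockerfile
# Pin the exact runtime version: no more guessing which Python is on the server
FROM python:3.11-slim

WORKDIR /app

# Dependencies are declared explicitly and baked into the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application code itself
COPY app.py .

CMD ["python", "app.py"]
```

Everything the service needs, from the interpreter version down to the library list, now lives in one version-controlled file rather than in a README or in someone's memory.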

What a container actually is

A container is a running process with a constrained view of the world.

It can see its own filesystem (not the host’s), its own network namespace, its own process tree. From inside the container, it looks like a complete Linux environment. From outside, it’s just a process on the host machine — fast to start, cheap to run, and disposable.

The magic is in the image: a layered, read-only snapshot of everything the application needs. When you run a container, you’re starting a process from that image. The image itself is immutable — to change it, you rebuild it from its source (the Dockerfile). A running container gets a thin writable layer on top, but when the container is removed, those changes are gone; nothing persists unless you explicitly set up a volume to store data outside it.

This immutability is the key security and reproducibility insight. The environment is defined in code, version-controlled, and rebuildable anywhere by anyone on the team.
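You can watch this ephemerality happen with a few illustrative commands (this assumes a local Docker install; the file names are made up for the demo):

```shell
# Write a file inside one container's writable layer...
docker run --name demo alpine sh -c 'echo hello > /data.txt'

# ...a fresh container from the same image never sees it
docker run --rm alpine cat /data.txt

# Mount a named volume, and the data survives across containers
docker run --rm -v mydata:/vol alpine sh -c 'echo hello > /vol/file.txt'
docker run --rm -v mydata:/vol alpine cat /vol/file.txt
```

The second `cat` fails because each container starts clean from the image; only the volume-backed fourth command finds the file.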

Containers vs virtual machines

Both solve the environment isolation problem, but at different levels of the stack.

Virtual machines use a hypervisor to virtualise physical hardware. Each VM runs its own guest operating system — a full kernel, init system, and userspace. Isolation is strong, but the overhead is significant. A typical VM image is several gigabytes; booting one means booting an entire OS, which takes tens of seconds at minimum.

Containers virtualise at the OS level instead. Multiple containers share the same host kernel — they just get isolated views of it via Linux kernel features (namespaces for isolation, cgroups for resource limits). A container image can be as small as a few megabytes. It starts in milliseconds, not seconds, because there’s no OS to boot.
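You can observe the namespaced view directly with an illustrative sketch (assuming a working Docker install; `sleep 60` is just a stand-in workload):

```shell
# Inside the container, the process tree starts at PID 1:
# the host's processes are invisible
docker run --rm alpine ps aux

# From the host, that same container is just an ordinary process
docker run -d --name sleeper alpine sleep 60
ps aux | grep 'sleep 60'
docker rm -f sleeper
```

The first command typically lists only `ps` itself, while the host-side `ps` shows the container's `sleep` as a regular entry in the host's process table.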

                      Virtual Machines         Containers
  Isolation level     Hardware (hypervisor)    OS kernel (namespaces)
  Image size          Several GB               A few MB
  Startup time        30–60+ seconds           Milliseconds
  Resource overhead   High (full guest OS)     Low (shared kernel)
  Portability         Good                     Excellent

The tradeoff: VMs provide stronger isolation (separate kernels mean a kernel exploit in one VM can’t affect others), while containers are faster and more efficient but share the host kernel’s attack surface. In practice, most workloads run containers, with VMs as the underlying compute layer (cloud instances are VMs running container workloads).

Why this matters in practice

Consistency across environments. The same image runs on your laptop, the CI server, staging, and production. If it works in one, it works in all — because the environment travels with the code.

Fast startup and scaling. Containers start in milliseconds, not minutes. This makes horizontal scaling, rolling deployments, and rapid recovery practical in ways that VM-based infrastructure isn’t.

Efficient resource use. Multiple containers share the host kernel. You can run dozens of isolated workloads on a machine that could only handle a handful of VMs.

Explicit dependencies. Everything the application needs is declared in the Dockerfile and checked into version control. No more tribal knowledge about what to install, or which version of a library was on the server that last worked.

Microservices architecture. Containers are the natural unit of deployment for microservices — each service gets its own container, its own network isolation, and its own explicit resource limits (CPU and memory caps enforced by the kernel).
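As a sketch of those kernel-enforced caps (the image name `myservice` is hypothetical; the flags are standard Docker CLI options):

```shell
# Limit this service to half a CPU core and 256 MB of memory;
# the kernel's cgroups enforce both caps on the container's processes
docker run -d --name api --cpus="0.5" --memory="256m" myservice:latest
```

If the service exceeds its memory cap, the kernel terminates it rather than letting it starve its neighbours, which is exactly the behaviour you want from an isolation boundary.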

Container runtimes

The container runtime is the software that actually runs containers on the host — it handles pulling images, creating namespaces, and starting processes. You interact with it through a higher-level tool like Docker, but it’s worth knowing the landscape:

  • runc — the reference OCI runtime, used by Docker and containerd under the hood
  • containerd — a higher-level runtime that manages the full container lifecycle; Docker’s own runtime since 2017
  • Docker — the most widely used toolchain; provides the image format, CLI, registry client, and build tools on top of containerd
  • CRI-O — a lightweight runtime built specifically for Kubernetes; implements the Container Runtime Interface (CRI) directly
  • Podman — a daemonless, rootless Docker-compatible alternative, common in RHEL environments
  • rkt (Rocket) — an early alternative runtime from CoreOS, now deprecated; worth knowing historically

For most purposes, Docker is the right starting point. The underlying runtimes matter when you’re working at the Kubernetes layer, where the choice of CRI implementation has operational consequences.

The standardisation layer: OCI

Before Docker (2013), container technology existed in Linux via cgroups and namespaces, but was too complex for most developers to use directly. Docker made it accessible. But Docker’s early dominance also created fragmentation risk — if everyone depended on a single vendor’s image format, the ecosystem would be locked in.

The Open Container Initiative (OCI) was created in 2015 to standardise container image formats and runtime specifications. Today, every major runtime speaks OCI. An image built with Docker can run under containerd, CRI-O, or Podman without changes. This is the standard that makes the container ecosystem portable.
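A concrete illustration of that portability, assuming both Docker and Podman are installed (the image name `myapp` is made up for the example):

```shell
# Build an image with Docker and export it as an archive...
docker build -t myapp:1.0 .
docker save myapp:1.0 -o myapp.tar

# ...then load and run the very same image under Podman, no conversion needed
podman load -i myapp.tar
podman run --rm myapp:1.0
```

Because both tools speak the same standardised format, the archive round-trips between runtimes without modification.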


Post 1 of the Docker series on The Digital Drift. Next: Docker basics — images, containers, and the commands you’ll use every day.
