· Satrajit · Containers · 5 min read
Docker networking — how containers talk to each other and the outside world
Docker creates three networks by default and lets you create more. Understanding which driver to use and when is what separates working setups from broken ones.
Networking is the part of Docker that most tutorials skip past too quickly. When containers can’t talk to each other, or can’t reach the internet, or are accidentally exposed to more of the network than intended — it’s almost always because of a misunderstanding about how Docker networking works.
This post covers every network driver, the key commands, and the decisions that matter in production.
What Docker creates by default
When Docker installs, it creates three networks automatically:
docker network ls
# NETWORK ID NAME DRIVER SCOPE
# abc123 bridge bridge local
# def456 host host local
# ghi789 none null local

The bridge (also called docker0) and host networks are core Docker components — you cannot remove them. The none network is also built-in. Any additional networks you create are user-defined and removable.
Network drivers
Bridge (default)
Bridge is the default driver. Every container you create gets attached to the default bridge network unless you specify otherwise. Containers on the bridge network can communicate with each other via IP address, and can reach the internet through NAT.
docker run --name web nginx:alpine # uses default bridge network
docker run --name web --network bridge nginx:alpine # explicit — same thing

The default bridge vs user-defined bridge — a critical distinction:
The default bridge network has a significant limitation: containers can only communicate by IP address. There’s no DNS resolution by container name. To connect two containers on the default bridge, you need --link (deprecated) or to look up IPs manually.
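What "look up IPs manually" means in practice: extract the container's address with an inspect template (a sketch, assuming a container named web is running on the default bridge):

```shell
# Print the IP address of 'web' on the default bridge network
docker inspect -f '{{ .NetworkSettings.Networks.bridge.IPAddress }}' web
```

You would then pass that IP to the other container by hand — exactly the brittleness user-defined networks remove.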
User-defined bridge networks solve this cleanly. When you create your own bridge network, Docker provides automatic DNS resolution by container name within that network. This is why you should always create a user-defined network for multi-container applications instead of using the default bridge.
Additional advantages of user-defined bridges:
- Better isolation: only containers explicitly attached to the network can communicate
- Containers can be attached to and detached from user-defined networks on the fly (not possible with the default bridge)
- Each user-defined bridge is independently configurable (subnet, gateway, MTU)
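As a sketch of that configurability — a user-defined bridge with an explicit subnet, gateway, and a lowered MTU (the subnet and MTU values here are illustrative; the MTU is set via the bridge driver option com.docker.network.driver.mtu):

```shell
docker network create \
  --driver bridge \
  --subnet 10.10.0.0/24 \
  --gateway 10.10.0.1 \
  -o com.docker.network.driver.mtu=1400 \
  custom-bridge
```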
Host
The host driver removes network isolation entirely. The container shares the host’s networking namespace — no separate IP address, no NAT, no port mapping.
docker run --name web --network host nginx
# Nginx now listens on port 80 of the HOST, not inside a container
# Access via http://localhost:80, not http://localhost:8080

Use cases:
- High-throughput network applications where NAT overhead matters
- Monitoring tools that need to observe the host’s network interfaces directly
Limitation: Linux only. Host networking has no equivalent on Docker Desktop for macOS or Windows.
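One quick way to see the missing isolation (on a Linux host): a container on the host network lists the host's own interfaces rather than a single container-side eth0:

```shell
# 'ip addr' here shows the HOST's interfaces (eth0, docker0, ...), not a container namespace
docker run --rm --network host alpine ip addr
```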
None
Completely disables networking for the container — only a loopback device (lo) is created.
docker run --name isolated --network none alpine

Used for batch jobs that don’t need network access, or in conjunction with a custom network driver.
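You can verify the isolation directly — with the none driver, the container's interface list contains nothing but loopback:

```shell
# Only 'lo' appears — no eth0, no default route, no way out
docker run --rm --network none alpine ip addr
```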
Overlay
Overlay networks span multiple Docker hosts, enabling containers on different machines to communicate as if they were on the same local network. This is the Docker Swarm networking model.
docker swarm init # initialises a swarm on the current node
# Running this creates two additional networks:
# - ingress (overlay, scope: swarm) — for load balancing
# - docker_gwbridge (bridge) — for container-to-host communication

To add a worker node to the swarm, the following ports must be open between nodes:
- 2377/TCP — cluster administration
- 7946/TCP and 7946/UDP — node-to-node communication
- 4789/UDP — overlay network traffic (VXLAN)
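On a host managed with ufw (an assumption — translate to firewalld or raw iptables as appropriate), opening those ports looks like:

```shell
sudo ufw allow 2377/tcp   # cluster administration
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp   # node-to-node communication
sudo ufw allow 4789/udp   # overlay network traffic (VXLAN)
```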
Macvlan
Macvlan assigns a real MAC address to a container, making it appear as a physical device on the network. Traffic routes directly to the container by MAC address — no NAT.
docker network create \
--driver macvlan \
--subnet 192.168.1.0/24 \
--gateway 192.168.1.1 \
-o parent=eth0 \
macvlan-net
docker run --name alpine-test --network macvlan-net alpine

Use case: legacy applications that require direct network access, or environments where containers need to be reachable from the physical network by their own IP.
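Because the container is a first-class citizen on the physical subnet, it can also be pinned to a specific address with --ip (the address below is illustrative and must fall inside the network's subnet):

```shell
# Give the container a fixed, LAN-reachable address
docker run --name legacy-app --network macvlan-net --ip 192.168.1.42 alpine
```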
Networking commands
Inspect a network
docker network inspect bridge
docker network inspect my-network

The inspect output shows the subnet, gateway, driver, and — importantly — the list of containers currently attached and their IP addresses within the network.
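The full inspect output is verbose JSON; a --format template pulls out just the fields you care about (a sketch against a network named my-network):

```shell
# Just the subnet/gateway configuration
docker network inspect -f '{{ json .IPAM.Config }}' my-network

# Name and IP of each attached container, one per line
docker network inspect -f '{{ range .Containers }}{{ .Name }} {{ .IPv4Address }}{{ "\n" }}{{ end }}' my-network
```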
Create a user-defined bridge network
docker network create my-network --driver bridge
# With custom subnet and gateway:
docker network create \
--subnet 192.168.2.0/24 \
--gateway 192.168.2.1 \
my-network

Attach a container to a network at creation
docker run --name web --network my-network nginx:alpine

Containers on the same user-defined network can reach each other by container name:
docker exec web ping db # 'db' resolves to the db container's IP automatically

This is the DNS resolution that the default bridge network lacks.
Attach a network to a running container
Unlike volumes (which can only be attached at container creation), networks can be connected and disconnected from running containers:
docker network connect my-network my-running-container
docker network disconnect my-network my-running-container

After connecting, inspect the container to see all its network interfaces:
docker container inspect my-running-container

Remove networks
docker network rm my-network # remove a specific network
docker network prune # remove all unused user-defined networks

You cannot remove a network that has containers attached to it. Remove or disconnect the containers first.
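A typical removal sequence, assuming a container named web is still attached to my-network:

```shell
docker network disconnect my-network web   # detach the container first
docker network rm my-network               # now the removal succeeds
```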
A practical multi-container example
The right way to connect an application server to a database:
# Create an isolated network
docker network create app-network
# Start the database — container name 'postgres' becomes its DNS hostname
docker run -d \
--name postgres \
--network app-network \
-e POSTGRES_PASSWORD=secret \
postgres:15
# Start the application — it connects to 'postgres:5432' by name
docker run -d \
--name web \
--network app-network \
-p 8080:3000 \
my-app:latest

The app container reaches the database at postgres:5432 — Docker’s built-in DNS resolves the container name to the right IP. No hardcoded IP addresses, no --link flags, no manual configuration.
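A quick sanity check of the name resolution, against the containers started above (this assumes the app image includes getent, which most glibc- and musl-based images do):

```shell
# Resolve 'postgres' from inside the web container — prints its IP on app-network
docker exec web getent hosts postgres
```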
The default network vs user-defined: when it matters
| | Default bridge | User-defined bridge |
|---|---|---|
| Container DNS resolution | No (IP only) | Yes (by container name) |
| Isolation | All containers share it | Only attached containers |
| Dynamic attach/detach | No | Yes |
| Custom subnet/gateway | Affects all containers | Per-network |
In development, the default bridge is fine for quick experiments. In any multi-container setup — even locally with Docker Compose — always use a user-defined network.
Part of the Docker series on The Digital Drift. Related: Docker volumes · Build and deploy a multi-container app