Satrajit Sengupta · Containers · 5 min read
Docker volumes — persistent storage for containers that need it
Containers are ephemeral by design. Docker volumes are how you keep data alive across container restarts, removals, and re-deploys — here's how they work and when to use them.
Container storage is ephemeral by design. When a container is removed, everything written to its filesystem disappears with it. For a stateless web server, that’s fine. For a database, a log aggregator, or anything that generates data you actually need — that’s a problem.
Docker volumes solve this. They’re the recommended mechanism for persisting data from containers, and once you understand how they work, they’re straightforward to use.
The three storage options
Docker offers three types of mounts for getting data in and out of containers:
Volumes — managed by Docker, stored in a part of the host filesystem that non-Docker processes shouldn’t touch (/var/lib/docker/volumes/). The recommended choice for most persistent data scenarios.
Bind mounts — a specific directory from the host filesystem mounted directly into the container, referenced by its absolute path. Useful during development (mount your source code directory into a container), but fragile in production because the host directory structure must be correct.
tmpfs mounts — stored in the host’s memory only, never written to disk. Good for temporary sensitive data (credentials, session tokens) that you don’t want to persist anywhere.
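As a quick illustration of the tmpfs case (the container name and size limit here are arbitrary), an in-memory mount is requested with --mount type=tmpfs:

```shell
# Mount an in-memory filesystem at /run/secrets inside the container.
# Nothing written there touches the host's disk, and the data
# vanishes when the container stops.
docker run -d \
  --mount type=tmpfs,destination=/run/secrets,tmpfs-size=64m \
  --name tmpfs-demo \
  nginx:alpine
```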
This post focuses on volumes. Bind mounts are covered separately in the context where they’re most useful: local development workflows.
Why volumes over bind mounts for production
- Docker manages the volume storage directory — non-Docker processes don’t touch it, which means no accidental modification from the host
- Volumes work the same on Linux and Windows hosts; bind mounts are path-dependent
- Volumes can be backed up, migrated, and inspected with Docker CLI and API
- Volumes don’t increase the size of the containers using them — the volume’s contents exist outside the container’s writable layer lifecycle
- Volumes can be shared safely across multiple containers simultaneously
- Volume drivers allow you to store data remotely — on NFS, cloud object storage, or encrypted block devices
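The backup point deserves a concrete sketch: a volume can be archived from a throwaway container without stopping anything that uses it. This is a common pattern rather than a dedicated Docker command, and the volume and archive names below are placeholders:

```shell
# Mount the volume read-only, mount the current directory as /backup,
# and tar the volume's contents into it. The helper container is
# removed automatically (--rm) once the archive is written.
docker run --rm \
  -v my-volume:/source:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/my-volume-backup.tar.gz -C /source .
```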
Common use cases
Database containers. MySQL, PostgreSQL, MongoDB — any containerised database needs a volume. Without one, every container restart or re-deploy loses all your data.
Application logs. If your log aggregation pipeline reads from the host filesystem, mount a volume so container logs land in a known, persistent location.
Shared data between containers. Multiple containers can mount the same volume simultaneously. Useful for a worker and a web server that both need access to uploaded files.
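A minimal sketch of that last case, with hypothetical container names and mount paths:

```shell
# One volume, two containers: the web server and the worker both see
# the same files under their respective mount points.
docker volume create shared-uploads
docker run -d --name web \
  --mount source=shared-uploads,destination=/usr/share/nginx/html/uploads \
  nginx:alpine
docker run -d --name worker \
  --mount source=shared-uploads,destination=/work/uploads \
  alpine sleep infinity
```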
Volume commands
List all volumes
docker volume ls
Create a volume
docker volume create my-volume
Inspect a volume
docker volume inspect my-volume
The inspect output shows you the creation time, the driver, the mount point on the host filesystem, and the volume name. The Mountpoint field tells you exactly where Docker stores the data on the host.
Mount a volume to a container (two syntaxes)
Both of these do the same thing — mount my-volume to /data inside the container:
Using -v (short syntax):
docker run -d \
-v my-volume:/data \
--name my-container \
nginx:alpine
Using --mount (explicit syntax):
docker run -d \
--mount source=my-volume,destination=/data \
--name my-container \
nginx:alpine
--mount is more verbose but clearer, and is the recommended form in scripts and documentation. -v is faster to type interactively.
Verify the mount:
docker inspect my-container --format='{{.Mounts}}'
Mount a volume read-only
If a container should only read from a volume (not write to it), mount it read-only:
docker run -d \
--mount source=my-volume,destination=/data,readonly \
--name reader-container \
nginx:alpine
Any attempt to write to /data inside the container will fail with "Read-only file system". This is the right approach for configuration volumes or shared reference data.
How volume data syncs
When you create a file inside a container at the mounted path, it appears immediately on the host at the volume’s mount point — and vice versa. When you remove the container, the volume (and its data) persists. The next container that mounts the same volume sees the same data.
# Create a file inside the container
docker exec -it my-container sh -c "echo 'hello' > /data/testfile.txt"
# Verify it exists on the host
cat /var/lib/docker/volumes/my-volume/_data/testfile.txt
# Output: hello
Remove volumes
docker volume rm my-volume # remove a specific volume
docker volume prune # remove unused anonymous volumes (since Docker 23.0, add --all to also remove unused named volumes)
You cannot remove a volume that is currently in use by a container (running or stopped). Remove the container first:
docker rm my-container
docker volume rm my-volume
Volume drivers
By default, volumes use the local driver — data stored on the Docker host’s filesystem. Docker’s volume driver abstraction lets you swap the storage backend without changing application code:
- local — default; host filesystem
- NFS — volume data stored on a network filesystem
- Cloud provider drivers (AWS EFS, Azure Files, GCP Filestore) — managed cloud storage
- REX-Ray — open-source driver for block and object storage across cloud providers
To use a non-default driver, install the plugin first, then specify it at volume creation:
docker volume create --driver <driver-name> my-cloud-volume
This is the key architectural advantage of volumes over bind mounts: you can migrate from local storage to cloud storage by changing the driver, without touching your container configuration.
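As a concrete sketch, the built-in local driver can itself target NFS via --opt flags; the server address and export path below are placeholders for your environment:

```shell
# Create a volume whose data lives on an NFS export instead of the
# Docker host's filesystem. Containers mount it like any other volume.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/exports/data \
  nfs-volume
```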
A real-world example: persistent MongoDB
# Create a named volume for MongoDB data
docker volume create mongo-data
# Run MongoDB with the volume mounted at its default data path
docker run -d \
--name mongodb \
--mount source=mongo-data,destination=/data/db \
mongo:6
# Stop and remove the container
docker stop mongodb && docker rm mongodb
# Start a new MongoDB container — the data is still there
docker run -d \
--name mongodb \
--mount source=mongo-data,destination=/data/db \
mongo:6
The database survives container removal because the data lives in the volume, not in the container’s writable layer.
Part of the Docker series on The Digital Drift. Related: Docker networking · Docker commands reference