Satrajit Sengupta · Containers · 5 min read

Build, deploy, and test a multi-container Docker application

Real applications aren't single containers. Here's how to build a Node.js + MongoDB two-container setup, wire them together over a custom network, and push the image to Docker Hub.

The previous posts covered containers, images, volumes, and networking as separate topics. This post puts them together in a realistic scenario: building and deploying a Node.js application that talks to a MongoDB database, with each running in its own container.

By the end you’ll have two containers communicating over a user-defined bridge network, an application accessible from your browser, and an image ready to push to Docker Hub.

The architecture

[ Browser ] → port 8080
                  ↓
         [ app container ]   (Node.js on port 3012)
              ↓ connects via DNS name 'mongo'
         [ mongo container ] (MongoDB on port 27017)

         Both containers on: custom bridge network

The application container and database container sit on an isolated user-defined network. The app reaches the database by the container name mongo — Docker’s built-in DNS handles the resolution. The app’s port 3012 is published to the host on port 8080, so browsers can reach it.
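From the app's side, the wiring is just a hostname. A minimal sketch, assuming the app builds its connection string with the official MongoDB driver — the environment variable MONGO_HOST and the database name adventure are hypothetical placeholders, not part of the tutorial's app:

```javascript
// Sketch: the app reaches MongoDB by the container name 'mongo'.
// Docker's embedded DNS resolves it on the custom bridge network.
const mongoHost = process.env.MONGO_HOST || 'mongo';
const mongoUri = `mongodb://${mongoHost}:27017/adventure`;

console.log(mongoUri); // mongodb://mongo:27017/adventure

// With the official driver, the app would then connect roughly like:
//   const { MongoClient } = require('mongodb');
//   const client = new MongoClient(mongoUri);
//   await client.connect();
```

Reading the hostname from an environment variable keeps the image reusable: the same build can point at a different database host without a rebuild.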

Step 1: Write the Dockerfile for the application

FROM centos:latest

# Add the NodeSource repository for Node.js
RUN curl -sL https://rpm.nodesource.com/setup_14.x | bash -

# Install Node.js, Git, and cleanup
RUN yum install -y nodejs git && yum clean all

# Set working directory
WORKDIR /opt/app

# Clone or copy your application source
# If using a Git repo:
RUN git clone https://github.com/your-username/your-app.git .
# Or copy local source:
# COPY . .

# Install Node.js dependencies
RUN npm install

# The application listens on port 3012
EXPOSE 3012

# Start the application
CMD ["node", "app.js"]

Note on the base image: centos:latest is used here to match the original tutorial, but be aware that CentOS Linux is end-of-life, so this base no longer receives updates. For new projects, prefer node:20-alpine — it’s significantly smaller, has a smaller attack surface, and ships with a non-root node user you can switch to with USER node. The CentOS-based approach is shown because it’s representative of what you’ll still find in enterprise environments.
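For comparison, here is a hedged sketch of the same image on node:20-alpine — the paths, port, and entrypoint are assumed to match the CentOS version above:

```dockerfile
# Sketch: the same app on the slimmer official Node image.
FROM node:20-alpine

WORKDIR /opt/app

# Copy the dependency manifests first so the npm install layer
# stays cached when only application code changes
COPY package*.json ./
RUN npm install --omit=dev

# Copy the rest of the application source
COPY . .

# Drop root; the 'node' user ships with the official image
USER node

EXPOSE 3012
CMD ["node", "app.js"]
```

Splitting the COPY into manifests-then-source is the main practical difference: changing app code no longer invalidates the npm install layer.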

Step 2: Build the application image

docker build -t adventure_app:v1 .

Watch the build output. Each RUN instruction is a new layer — you’ll see the Node.js install, the git clone, and the npm install as separate cached steps.

Verify the image was created:

docker images

The MongoDB image doesn’t need a custom build — Docker Hub’s official mongo image works directly.

Step 3: Create a custom bridge network

Don’t use the default bridge network here. Docker’s embedded DNS does not resolve container names on the default bridge, so the app container would be unable to reach the database at the hostname mongo. A user-defined bridge network provides that name resolution automatically.

docker network create app-network

Step 4: Start the MongoDB container

docker run --rm -itd \
  --name mongo \
  --network app-network \
  mongo:latest

The container name mongo is critical. The application’s database connection string references mongo as the hostname. Docker’s DNS resolves this name to the MongoDB container’s IP on the app-network network. If you name the container anything else, the app will fail to connect.

The --rm flag removes the container automatically when it stops. Remove this if you want the container to persist after stopping.

Verify it’s running:

docker ps

Step 5: Start the application container

docker run --rm -itd \
  --name adventure-app \
  --network app-network \
  --publish 8080:3012 \
  adventure_app:v1

  • --network app-network puts both containers on the same network, enabling DNS-based communication
  • --publish 8080:3012 publishes container port 3012 on host port 8080 (the format is host:container)

The order matters: start MongoDB before the application, otherwise the app may fail on the initial connection attempt.
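A more robust alternative is to retry the initial connection inside the app itself, so start order stops mattering. This is a sketch, not part of the tutorial’s app: connectWithRetry is a hypothetical helper, and the stub below simulates a database that isn’t ready yet.

```javascript
// Sketch: retry the initial DB connection instead of relying on start order.
async function connectWithRetry(connect, attempts = 5, delayMs = 200) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await connect();
    } catch (err) {
      if (i === attempts) throw err; // give up after the last attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Demo: a stub connect() that fails twice, then succeeds.
let calls = 0;
const stubConnect = async () => {
  calls += 1;
  if (calls < 3) throw new Error('mongo not ready');
  return 'connected';
};

connectWithRetry(stubConnect, 5, 10).then((result) => {
  console.log(result, 'after', calls, 'attempts'); // connected after 3 attempts
});
```

In the real app, connect() would be the driver’s connect call; the loop turns a crash-on-boot into a short wait.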

Step 6: Test the application

Open http://<host-ip>:8080 in a browser (or http://localhost:8080 if running locally).

If the application has a user signup flow, create a test account. The signup data should be written to MongoDB in the mongo container.

To verify the data actually landed in MongoDB:

# Open a MongoDB shell inside the mongo container
docker exec -it mongo mongosh

# Inside mongosh:
show dbs
use your-database-name
show collections
db.users.find()   # or whatever collection your app writes to
exit

If you see your test data, the connection between the app and database is working correctly.

Step 7: Tag and push to Docker Hub

Once the image is working, push it to Docker Hub so it can be pulled on any machine.

Log in to Docker Hub:

docker login

Tag the image with your Docker Hub username:

docker tag adventure_app:v1 yourdockerhubuser/adventure_app:v1
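The reference you just created has three parts — namespace, repository, and tag. A small sketch that splits them with plain shell parameter expansion (no Docker needed), using the placeholder name from above:

```shell
# Sketch: anatomy of the image reference used in the tag command.
ref="yourdockerhubuser/adventure_app:v1"

echo "namespace:  ${ref%%/*}"        # yourdockerhubuser (your Docker Hub username)
repo_tag="${ref#*/}"
echo "repository: ${repo_tag%%:*}"   # adventure_app
echo "tag:        ${ref##*:}"        # v1
```

Without a namespace, Docker assumes the official library/ namespace on Docker Hub — which is why the push below needs your username in front.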

Push:

docker push yourdockerhubuser/adventure_app:v1

Docker uploads only the layers that aren’t already in the registry. If you push a v2 later that reuses most of the same layers (e.g., you only changed application code, not the base image or dependencies), only the changed layers upload — which makes subsequent pushes fast.

What to try next

This two-container setup is functional but manual. Running it in production means:

  • Docker Compose — define both containers, the network, and volumes in a single docker-compose.yml file and start everything with docker compose up. Covered in the next post.
  • Health checks — add HEALTHCHECK to the Dockerfile so Docker knows when the app is actually ready, not just running.
  • Volume for MongoDB — add --mount source=mongo-data,destination=/data/db to the mongo docker run command so the database survives container restarts.

The current setup loses MongoDB data every time the container is removed. For any environment beyond throwaway testing, add a named volume.
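All of the manual steps above collapse into one file with Compose. A minimal sketch, assuming the image and names used in this post — Compose creates its own network, so mongo still resolves by service name:

```yaml
# Sketch: docker-compose.yml equivalent of the manual setup (next post's topic).
services:
  mongo:
    image: mongo:latest
    volumes:
      - mongo-data:/data/db   # named volume so data survives restarts
  adventure-app:
    image: adventure_app:v1
    ports:
      - "8080:3012"
    depends_on:
      - mongo                 # start order only; keep a connection retry in the app

volumes:
  mongo-data:
```

Note that depends_on only controls start order, not readiness — MongoDB may still be initializing when the app boots, which is another reason to retry the initial connection.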


Part of the Docker series on The Digital Drift. Previous: Docker networking · Next: Docker Compose (coming soon)
