Docker Compose: End-to-End

A visual breakdown of how Docker Compose defines, networks, and runs multiple services as one application.

1. Compose File

Your declarative definition of the app.

services:
  mongodb:
    image: mongo
  mongo-express:
    image: mongo-express
    depends_on: [mongodb]

Key idea

Compose turns a set of containers into a single “application unit”.
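
For contrast, roughly the same two services wired up by hand with plain docker run (the mongo-net network name is illustrative); Compose does all of this for you:

$ docker network create mongo-net
$ docker run -d --name mongodb --network mongo-net mongo
$ docker run -d --name mongo-express --network mongo-net mongo-express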

2. Docker Compose CLI

You apply the desired state with a single command, and inspect or tear it down with a couple more.

$ docker compose up -d
$ docker compose logs -f
$ docker compose down

What Compose does

  • Creates a project network
  • Creates containers
  • Wires DNS by service name
  • Optionally builds images (see the sketch below)
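
A minimal sketch of the build case plus a few related commands; the ./app build context and the shop project name are only examples:

services:
  my-app:
    build: ./app                   # build the image from ./app/Dockerfile instead of pulling one

$ docker compose up -d --build     # rebuild images before starting
$ docker compose ps                # list this project's containers
$ docker compose -p shop up -d     # override the project name used for the network and containers
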
3. What Compose Creates (per project)

Network
<project>_default

All services join this network by default.

Service names become DNS names (e.g., mongodb).
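
A quick way to confirm this from inside a running container (assumes the image provides getent, which the Debian-based mongo image does):

$ docker compose exec mongodb getent hosts mongo-express

This prints the address that the mongo-express service name resolves to on the project network.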

Containers
<project>-mongodb-1

<project>-mongo-express-1

Containers are namespaced by the project name.

(Optional) Volumes
db_data

Use volumes to keep DB data across restarts.
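
A minimal sketch of wiring db_data to the database; /data/db is where the official mongo image stores its data:

services:
  mongodb:
    image: mongo
    volumes:
      - db_data:/data/db           # named volume, survives docker compose down
volumes:
  db_data:                         # declared at the top level so Compose creates and tracks it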

4. Example Stack (from your notes)

All services share one Compose network. Both mongo-express and my-app connect directly to mongodb.

mongo-express (web UI container)
  • ports: 8081:8081
  • depends_on: mongodb
  • connects to mongodb:27017

my-app (application container)
  • ports: 3000:3000
  • env: MONGO_URL=mongodb (the value is the mongodb service name, resolved by Compose DNS)
  • connects to mongodb:27017

mongodb (database container, with optional persistence)
  • port: 27017
  • volume: db_data

From your laptop, you typically hit localhost:8081 (mongo-express) and localhost:3000 (my-app).
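
Putting it together, a compose file for this stack could look like the sketch below. The build context for my-app, the full MONGO_URL value, and the ME_CONFIG_MONGODB_SERVER setting for mongo-express are assumptions, not something the notes above pin down:

services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"              # publish only if you need host access to the DB
    volumes:
      - db_data:/data/db
  mongo-express:
    image: mongo-express
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_SERVER: mongodb       # which host mongo-express should talk to
    depends_on:
      - mongodb
  my-app:
    build: .                       # assumes a local Dockerfile for the app
    ports:
      - "3000:3000"
    environment:
      MONGO_URL: mongodb://mongodb:27017      # the hostname is the mongodb service name
    depends_on:
      - mongodb
volumes:
  db_data: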

5. What to remember

  • Compose gives you one network and DNS by service name.
  • depends_on controls start order, not readiness (see the healthcheck sketch below).
  • down removes containers + the network; declare named volumes so data persists.
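
If a service must wait for the database to be ready rather than merely started, pair depends_on with a healthcheck. A sketch assuming a recent mongo image that ships mongosh:

services:
  mongodb:
    image: mongo
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  my-app:
    image: my-app                  # placeholder for your application image
    depends_on:
      mongodb:
        condition: service_healthy # wait for the healthcheck to pass, not just for the container to start
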
6. Lifecycle (most used commands)

Start / create
docker compose up -d

Create network + containers and start them.

Logs
docker compose logs -f

Aggregated logs from all services.
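
Logs can also be limited to a single service:

docker compose logs -f mongodb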

Stop / start
docker compose stop

docker compose start

stop keeps the containers and just stops their processes; start runs them again.
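
There is also a one-step restart:

docker compose restart

Restarts the services without recreating the containers.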

Remove
docker compose down

Remove containers + network (and optionally volumes).
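
Named volumes survive a plain down; pass --volumes to remove them as well:

docker compose down --volumes

Removes containers, the network, and named volumes such as db_data (the data is gone).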

7. Where Compose Fits (and where it doesn't)
Great for
  • Local dev stacks (app + DB + cache)
  • Small single-host deployments
  • Reproducible demos
Not great for
  • Large-scale multi-node orchestration
  • Auto-scaling, self-healing across a cluster
  • Production-grade scheduling (use Kubernetes)