Docker Compose

Docker Compose is one of the most powerful and useful Docker tools. Let's dive in!


Part 1: What is Docker Compose?

The Problem Docker Compose Solves

Imagine you built a multi-container application:

Your Application needs:
├── Web server (Nginx)
├── API server (Python Flask)
├── Database (PostgreSQL)
├── Cache (Redis)
└── Message Queue (RabbitMQ)

5 containers to manage!

Without Docker Compose:

# Create networks
docker network create frontend
docker network create backend

# Start database
docker run -d \
  --name postgres \
  --network backend \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=myapp \
  -v postgres-data:/var/lib/postgresql/data \
  postgres:15

# Start Redis
docker run -d \
  --name redis \
  --network backend \
  redis:7

# Start API
docker run -d \
  --name api \
  --network backend \
  -e DATABASE_URL=postgresql://postgres:secret@postgres:5432/myapp \
  -e REDIS_URL=redis://redis:6379 \
  my-api

docker network connect frontend api

# Start Web
docker run -d \
  --name web \
  --network frontend \
  -p 80:80 \
  my-web

# That's a LOT of commands! 😰
# And you need to remember all of them!
# Starting, stopping, updating... nightmare!

Docker Compose Solution

With Docker Compose, ONE file describes everything:

# docker-compose.yml
version: '3.8'

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:7

  api:
    build: ./api
    environment:
      DATABASE_URL: postgresql://postgres:secret@postgres:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      - postgres
      - redis

  web:
    build: ./web
    ports:
      - "80:80"
    depends_on:
      - api

volumes:
  postgres-data:

Now just run:

docker-compose up

That's it! All of the containers started with the proper configuration! ✓


What is Docker Compose?

Simple Definition:

Docker Compose = Tool for defining and running 
                 multi-container Docker applications

Key features:
├── Define everything in YAML file
├── Single command to start/stop all containers
├── Automatic network creation
├── Volume management
├── Service dependencies
└── Easy scaling

Think of it as:

Recipe Book (docker-compose.yml):
├── Lists all ingredients (services)
├── Preparation steps (configuration)
├── Cooking order (dependencies)
└── Final presentation (ports, networks)

One command to cook the entire meal! 🍽️

Part 2: Installing Docker Compose

Checking if Docker Compose is Installed

Docker Desktop includes Docker Compose!

docker compose version

Output:

Docker Compose version v2.24.5

✓ Already installed with Docker Desktop!


Docker Compose v1 vs v2

Two versions exist:

Docker Compose v1:
├── Separate tool
├── Command: docker-compose (with hyphen)
└── Older version

Docker Compose v2:
├── Integrated into Docker CLI
├── Command: docker compose (space, no hyphen)
└── Newer, faster version

Both may be installed on your system, but v1 reached end of life in 2023, so use v2:

# v1 syntax (old)
docker-compose up

# v2 syntax (new, recommended)
docker compose up

For this tutorial, we'll use the v2 syntax (docker compose); if you only have the standalone docker-compose, the commands are identical apart from the hyphen.


Part 3: Docker Compose File Basics

Creating Your First docker-compose.yml

Docker Compose uses YAML format.

YAML Basics (Quick!):

# Comments start with #

# Key-value pairs
name: value

# Nested structure (indentation matters!)
parent:
  child: value
  another_child: value

# Lists
items:
  - item1
  - item2
  - item3

# Multi-line strings
description: |
  This is a
  multi-line
  string

⚠️ Important: YAML is VERY sensitive to indentation! Use spaces, not tabs!


Basic docker-compose.yml Structure

version: '3.8'  # Compose file version (optional in Compose v2)

services:       # Define containers
  service1:
    # Configuration for service1
  
  service2:
    # Configuration for service2

volumes:        # Define volumes (optional)
  volume1:

networks:       # Define networks (optional)
  network1:

Example 1: Single Service (Nginx)

docker-compose.yml:

version: '3.8'

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"

That's it! Now run:

docker compose up

Output:

[+] Running 1/1
 ✔ Container project-web-1  Started
 
Attaching to web-1
web-1  | /docker-entrypoint.sh: Configuration complete
web-1  | nginx: [notice] starting nginx...

Open browser: http://localhost:8080

✓ Nginx running!

To stop:

# Press Ctrl+C

# Or in another terminal:
docker compose down

Understanding Service Names

In docker-compose.yml:

services:
  web:      # ← This is the service name

Docker Compose creates container with name:

project-web-1
  ↑     ↑   ↑
  │     │   └── Instance number
  │     └── Service name
  └── Project name (directory name)
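
This naming scheme is mechanical enough to express as a one-liner (Compose v2 joins the parts with hyphens; the older v1 used underscores, e.g. project_web_1):

```python
def container_name(project: str, service: str, index: int = 1) -> str:
    # Compose v2 default: <project>-<service>-<index>
    return f"{project}-{service}-{index}"

print(container_name("myapp", "web"))  # myapp-web-1
```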

Part 4: Service Configuration Options

Common Service Options

Let's explore all important options:


1. image - Use Existing Image

services:
  db:
    image: postgres:15
    # Uses official PostgreSQL image from Docker Hub

2. build - Build from Dockerfile

services:
  api:
    build: ./api
    # Builds from Dockerfile in ./api directory

Or with more options:

services:
  api:
    build:
      context: ./api        # Directory with Dockerfile
      dockerfile: Dockerfile.prod  # Custom Dockerfile name
      args:                 # Build arguments
        VERSION: 1.0

3. ports - Port Mapping

services:
  web:
    image: nginx
    ports:
      - "8080:80"       # Host:Container
      - "8443:443"

Format:

ports:
  - "HOST_PORT:CONTAINER_PORT"
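
To make the format concrete, here is a small illustrative parser (not Docker's own code) that also handles an optional host IP prefix, e.g. "127.0.0.1:8080:80":

```python
def parse_port(mapping: str):
    # Everything after the last ":" is the container port;
    # the rest is the host side, which may include a bind IP.
    host_side, _, container_port = mapping.rpartition(":")
    if ":" in host_side:
        host_ip, _, host_port = host_side.rpartition(":")
    else:
        host_ip, host_port = "0.0.0.0", host_side  # default: all interfaces
    return host_ip, host_port, container_port

print(parse_port("8080:80"))            # ('0.0.0.0', '8080', '80')
print(parse_port("127.0.0.1:8080:80"))  # ('127.0.0.1', '8080', '80')
```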

4. environment - Environment Variables

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
      POSTGRES_USER: admin

Or from file:

services:
  api:
    image: my-api
    env_file:
      - .env        # Load from .env file

.env file:

DATABASE_URL=postgresql://localhost/mydb
API_KEY=abc123
DEBUG=true
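
Under the hood, the file is just KEY=VALUE lines. A minimal parser sketch (Compose's real loader also handles quoting and interpolation) shows the idea:

```python
def parse_env_file(text: str) -> dict:
    """Parse KEY=VALUE lines; '#' comments and blank lines are skipped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """\
# .env
DATABASE_URL=postgresql://localhost/mydb
API_KEY=abc123
DEBUG=true
"""
print(parse_env_file(sample)["API_KEY"])  # abc123
```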

5. volumes - Data Persistence

Named volume:

services:
  db:
    image: postgres:15
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:    # Define volume

Bind mount:

services:
  web:
    image: nginx
    volumes:
      - ./html:/usr/share/nginx/html    # Host:Container

Multiple volumes:

services:
  app:
    image: my-app
    volumes:
      - app-data:/data          # Named volume
      - ./config:/app/config    # Bind mount
      - ./logs:/app/logs        # Another bind mount

6. depends_on - Service Dependencies

services:
  web:
    image: nginx
    depends_on:
      - api         # Start api before web

  api:
    image: my-api
    depends_on:
      - db          # Start db before api

  db:
    image: postgres

Start order: db → api → web

⚠️ Note: depends_on only waits for container to START, not for it to be READY!
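
The start order is just a topological sort of the depends_on graph; Python's standard library can reproduce it:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each service maps to the set of services it depends on
depends_on = {
    "web": {"api"},
    "api": {"db"},
    "db": set(),
}

# Dependencies come first in the resulting order
start_order = list(TopologicalSorter(depends_on).static_order())
print(start_order)  # ['db', 'api', 'web']
```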


7. networks - Custom Networks

services:
  web:
    image: nginx
    networks:
      - frontend

  api:
    image: my-api
    networks:
      - frontend
      - backend

  db:
    image: postgres
    networks:
      - backend

networks:
  frontend:
  backend:

8. restart - Restart Policy

services:
  api:
    image: my-api
    restart: always
    # Options: no, always, on-failure, unless-stopped

Options:

no              = Never restart (quote it as "no" in YAML; bare no parses as a boolean)
always          = Always restart (even after the Docker daemon restarts)
on-failure      = Restart only if exit code != 0
unless-stopped  = Always restart unless manually stopped
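
As a rough mental model (a simplification; the real engine also applies restart delays and daemon-restart rules), the four policies reduce to:

```python
def should_restart(policy: str, exit_code: int, manually_stopped: bool) -> bool:
    # Simplified model of Docker's four restart policies
    if policy == "no":
        return False
    if policy == "always":
        return True
    if policy == "on-failure":
        return exit_code != 0
    if policy == "unless-stopped":
        return not manually_stopped
    raise ValueError(f"unknown policy: {policy!r}")

print(should_restart("on-failure", exit_code=1, manually_stopped=False))    # True
print(should_restart("unless-stopped", exit_code=0, manually_stopped=True)) # False
```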

9. command - Override Default Command

services:
  db:
    image: postgres
    command: postgres -c max_connections=200
    # Overrides default command

10. container_name - Custom Container Name

services:
  db:
    image: postgres
    container_name: my-postgres-db
    # Instead of default: project-db-1

Part 5: Complete Example - Web Application

Building a Full Application

Let's create: Web Frontend + API Backend + PostgreSQL Database


Project Structure

my-app/
├── docker-compose.yml
├── web/
│   ├── Dockerfile
│   ├── index.html
│   └── nginx.conf
├── api/
│   ├── Dockerfile
│   ├── app.py
│   └── requirements.txt
└── .env

Step 1: Create Project Directory

mkdir my-app
cd my-app
mkdir web api

Step 2: Create API

api/app.py:

from flask import Flask, jsonify
import psycopg2
import os
import time

app = Flask(__name__)

def get_db():
    # Wait for database to be ready
    max_retries = 30
    for i in range(max_retries):
        try:
            conn = psycopg2.connect(
                host=os.getenv('DB_HOST', 'db'),
                database=os.getenv('DB_NAME', 'myapp'),
                user=os.getenv('DB_USER', 'postgres'),
                password=os.getenv('DB_PASSWORD', 'secret')
            )
            return conn
        except psycopg2.OperationalError:
            if i < max_retries - 1:
                time.sleep(1)
            else:
                raise

@app.route('/api/status')
def status():
    return jsonify({
        'status': 'ok',
        'message': 'API is running!'
    })

@app.route('/api/db-check')
def db_check():
    try:
        db = get_db()
        cursor = db.cursor()
        cursor.execute('SELECT version()')
        version = cursor.fetchone()[0]
        db.close()
        return jsonify({
            'status': 'ok',
            'database': version
        })
    except Exception as e:
        return jsonify({
            'status': 'error',
            'message': str(e)
        }), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

api/requirements.txt:

flask==3.0.0
psycopg2-binary==2.9.9

api/Dockerfile:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]

Step 3: Create Web Frontend

web/index.html:

<!DOCTYPE html>
<html>
<head>
    <title>My Docker Compose App</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            max-width: 800px;
            margin: 50px auto;
            padding: 20px;
            background-color: #f5f5f5;
        }
        .container {
            background: white;
            padding: 30px;
            border-radius: 10px;
            box-shadow: 0 2px 10px rgba(0,0,0,0.1);
        }
        button {
            background: #007bff;
            color: white;
            border: none;
            padding: 10px 20px;
            border-radius: 5px;
            cursor: pointer;
            margin: 5px;
        }
        button:hover {
            background: #0056b3;
        }
        #result {
            background: #f8f9fa;
            padding: 15px;
            border-radius: 5px;
            margin-top: 20px;
            white-space: pre-wrap;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>🐳 Docker Compose Demo App</h1>
        <p>This demonstrates a multi-container application with Docker Compose!</p>
        
        <div>
            <button onclick="checkAPI()">Check API Status</button>
            <button onclick="checkDB()">Check Database</button>
        </div>
        
        <div id="result"></div>
    </div>
    
    <script>
        async function checkAPI() {
            const result = document.getElementById('result');
            result.textContent = 'Loading...';
            
            try {
                const response = await fetch('/api/status');
                const data = await response.json();
                result.textContent = JSON.stringify(data, null, 2);
            } catch (error) {
                result.textContent = 'Error: ' + error.message;
            }
        }
        
        async function checkDB() {
            const result = document.getElementById('result');
            result.textContent = 'Loading...';
            
            try {
                const response = await fetch('/api/db-check');
                const data = await response.json();
                result.textContent = JSON.stringify(data, null, 2);
            } catch (error) {
                result.textContent = 'Error: ' + error.message;
            }
        }
    </script>
</body>
</html>

web/nginx.conf:

server {
    listen 80;
    
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
    
    location /api/ {
        proxy_pass http://api:5000/api/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

web/Dockerfile:

FROM nginx:alpine

COPY index.html /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/conf.d/default.conf

Step 4: Create docker-compose.yml

docker-compose.yml:

version: '3.8'

services:
  # PostgreSQL Database
  db:
    image: postgres:15-alpine
    container_name: myapp-postgres
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - backend
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # API Backend
  api:
    build: ./api
    container_name: myapp-api
    environment:
      DB_HOST: db
      DB_NAME: myapp
      DB_USER: postgres
      DB_PASSWORD: secret
    depends_on:
      db:
        condition: service_healthy
    networks:
      - frontend
      - backend
    restart: unless-stopped

  # Web Frontend
  web:
    build: ./web
    container_name: myapp-web
    ports:
      - "8080:80"
    depends_on:
      - api
    networks:
      - frontend
    restart: unless-stopped

volumes:
  postgres-data:

networks:
  frontend:
  backend:

Step 5: Run the Application

Start everything:

docker compose up

Or run in background:

docker compose up -d

Output:

[+] Running 5/5
 ✔ Network myapp_frontend        Created
 ✔ Network myapp_backend         Created
 ✔ Volume "myapp_postgres-data"  Created
 ✔ Container myapp-postgres      Started
 ✔ Container myapp-api           Started
 ✔ Container myapp-web           Started

Open browser: http://localhost:8080

Click buttons to test! ✓


Part 6: Docker Compose Commands

Essential Commands

Start services:

# Start in foreground (see logs)
docker compose up

# Start in background (detached)
docker compose up -d

# Rebuild images and start
docker compose up --build

# Start specific service
docker compose up web

Stop services:

# Stop (keeps containers)
docker compose stop

# Stop and remove containers
docker compose down

# Stop, remove containers, volumes, and networks
docker compose down -v

# Remove everything including images
docker compose down --rmi all

View logs:

# All services
docker compose logs

# Follow logs (real-time)
docker compose logs -f

# Specific service
docker compose logs api

# Last 100 lines
docker compose logs --tail=100

List services:

docker compose ps

Output:

NAME                IMAGE           STATUS    PORTS
myapp-web           myapp-web       Up        0.0.0.0:8080->80/tcp
myapp-api           myapp-api       Up
myapp-postgres      postgres:15     Up

Execute commands in service:

# Open shell in service
docker compose exec api bash

# Run command
docker compose exec db psql -U postgres

# Run as different user
docker compose exec -u root api bash

View service configuration:

docker compose config

Shows resolved configuration with all variables substituted.


Restart services:

# Restart all
docker compose restart

# Restart specific service
docker compose restart api

Scale services:

# Run 3 instances of api
docker compose up -d --scale api=3

Build images:

# Build all images
docker compose build

# Build specific service
docker compose build api

# Build without cache
docker compose build --no-cache

Pull images:

# Pull all images
docker compose pull

# Pull specific service
docker compose pull db

Part 7: Environment Variables and .env Files

Using .env File

Create .env file:

# .env
DB_NAME=myapp
DB_USER=postgres
DB_PASSWORD=supersecret
API_PORT=5000
WEB_PORT=8080

docker-compose.yml:

version: '3.8'

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}

  api:
    build: ./api
    environment:
      DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@db:5432/${DB_NAME}
    ports:
      - "${API_PORT}:5000"

  web:
    build: ./web
    ports:
      - "${WEB_PORT}:80"

Variables automatically loaded from .env! ✓
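
The ${VAR} interpolation happens to match Python's string.Template syntax, so the substitution step is easy to illustrate (illustration only; Compose also supports defaults like ${VAR:-fallback}):

```python
from string import Template

env = {"DB_USER": "postgres", "DB_PASSWORD": "supersecret", "DB_NAME": "myapp"}
raw = "postgresql://${DB_USER}:${DB_PASSWORD}@db:5432/${DB_NAME}"

# Substitute ${...} placeholders from the env mapping
expanded = Template(raw).substitute(env)
print(expanded)  # postgresql://postgres:supersecret@db:5432/myapp
```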


Multiple Environment Files

# Use different env file
docker compose --env-file .env.production up

# Or point it at a local one
docker compose --env-file .env.local up

Passing Environment Variables

# From command line
DB_PASSWORD=newsecret docker compose up

# System environment variables
export DB_PASSWORD=newsecret
docker compose up

Part 8: Profiles (Conditional Services)

What are Profiles?

Run different sets of services for different scenarios.

Example:

version: '3.8'

services:
  # Always run
  web:
    image: nginx
    ports:
      - "80:80"

  api:
    image: my-api
    depends_on:
      - db

  db:
    image: postgres

  # Only for development
  adminer:
    image: adminer
    profiles:
      - dev
    ports:
      - "8080:8080"

  # Only for debugging
  debug-tools:
    image: nicolaka/netshoot
    profiles:
      - debug
    command: sleep infinity

Usage:

# Start only core services
docker compose up

# Start with dev profile (includes adminer)
docker compose --profile dev up

# Start with debug profile
docker compose --profile debug up

# Start with multiple profiles
docker compose --profile dev --profile debug up
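
The selection rule itself is simple and can be sketched as follows (a model of the rule, not Compose's actual code): a service without a profiles key always runs; a profiled service runs only when one of its profiles is activated.

```python
def services_to_start(services: dict, active_profiles: set) -> list:
    # No 'profiles' key → always started; otherwise needs a matching profile
    return [
        name for name, cfg in services.items()
        if not cfg.get("profiles") or set(cfg["profiles"]) & active_profiles
    ]

services = {
    "web": {},
    "api": {},
    "db": {},
    "adminer": {"profiles": ["dev"]},
    "debug-tools": {"profiles": ["debug"]},
}

print(services_to_start(services, set()))    # ['web', 'api', 'db']
print(services_to_start(services, {"dev"}))  # ['web', 'api', 'db', 'adminer']
```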

Part 9: Healthchecks

Adding Healthchecks

Healthcheck = Test if service is actually ready

services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s      # Check every 10 seconds
      timeout: 5s        # Fail if takes > 5 seconds
      retries: 3         # Try 3 times before giving up
      start_period: 30s  # Grace period on startup

  api:
    build: ./api
    depends_on:
      db:
        condition: service_healthy  # Wait for db to be healthy!
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

Benefits:

Without healthcheck:
├── depends_on waits for container to start
├── But container might not be ready yet!
└── API tries to connect to DB → Fails! ✗

With healthcheck:
├── depends_on waits for service to be HEALTHY
├── Container started AND ready
└── API connects successfully ✓
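
The interval/retries loop behind a healthcheck can be sketched like this (a toy model; the real engine runs the test command inside the container):

```python
import time

def wait_until_healthy(check, retries: int = 3, interval: float = 0.1) -> bool:
    """Run `check` up to `retries` times, `interval` seconds apart."""
    for attempt in range(retries):
        if check():
            return True
        if attempt < retries - 1:
            time.sleep(interval)
    return False

# Simulate a database that answers on the third probe
state = {"probes": 0}
def db_ready() -> bool:
    state["probes"] += 1
    return state["probes"] >= 3

print(wait_until_healthy(db_ready, retries=5, interval=0))  # True
```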

Part 10: Advanced Example - Full Stack Application

Complete Real-World Example

docker-compose.yml:

version: '3.8'

services:
  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - web
      - api
    networks:
      - frontend
    restart: always

  # Frontend (React)
  web:
    build:
      context: ./frontend
      args:
        NODE_ENV: production
    container_name: react-app
    environment:
      - REACT_APP_API_URL=http://localhost/api
    networks:
      - frontend
    restart: always

  # Backend API (Node.js)
  api:
    build: ./backend
    container_name: nodejs-api
    environment:
      NODE_ENV: production
      DB_HOST: postgres
      DB_PORT: 5432
      DB_NAME: ${DB_NAME}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      REDIS_HOST: redis
      REDIS_PORT: 6379
      JWT_SECRET: ${JWT_SECRET}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - frontend
      - backend
    restart: always
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    container_name: postgres-db
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - backend
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: redis-cache
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data
    networks:
      - backend
    restart: always

  # Database Admin (Development only)
  adminer:
    image: adminer
    container_name: db-admin
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    networks:
      - backend
    profiles:
      - dev
    restart: unless-stopped

volumes:
  postgres-data:
    driver: local
  redis-data:
    driver: local

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

.env:

# Database
DB_NAME=myapp
DB_USER=appuser
DB_PASSWORD=strongpassword123

# JWT
JWT_SECRET=your-secret-key-change-in-production

# Node
NODE_ENV=production

Usage:

# Production (no adminer)
docker compose up -d

# Development (with adminer)
docker compose --profile dev up -d

# View logs
docker compose logs -f

# Stop
docker compose down

Part 11: Docker Compose Best Practices

1. Use Specific Image Tags

Bad:

services:
  db:
    image: postgres  # Latest version, unpredictable!

Good:

services:
  db:
    image: postgres:15-alpine  # Specific version

2. Use .env for Sensitive Data

Bad:

services:
  db:
    environment:
      POSTGRES_PASSWORD: hardcoded-password  # Never do this!

Good:

services:
  db:
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}  # From .env file

Add .env to .gitignore!


3. Use Named Volumes

Bad:

volumes:
  - ./data:/var/lib/postgresql/data  # Bind mount

Good:

volumes:
  - postgres-data:/var/lib/postgresql/data  # Named volume

volumes:
  postgres-data:

4. Add Healthchecks

services:
  api:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

5. Use Restart Policies

services:
  web:
    restart: unless-stopped  # Auto-restart unless manually stopped

6. Separate Networks

networks:
  frontend:  # Public-facing services
  backend:   # Internal services (database, cache)

7. Order Services Properly

services:
  db:        # Database first
  api:       # API depends on db
    depends_on:
      - db
  web:       # Web depends on api
    depends_on:
      - api

Summary

What We Learned:

✅ What Docker Compose is and why it's useful
✅ docker-compose.yml file structure
✅ Service configuration options
✅ Building multi-container applications
✅ Docker Compose commands
✅ Environment variables and .env files
✅ Profiles for different scenarios
✅ Healthchecks
✅ Networks and volumes in Compose
✅ Real-world examples
✅ Best practices

Key Takeaways:

1. Docker Compose = Multi-container management tool
2. One YAML file describes entire application
3. Single command to start/stop everything
4. Automatic networking between services
5. Perfect for development and simple deployments
6. Use service names for container communication
7. Always use .env for sensitive data
8. Add healthchecks for reliable startups

Common Commands:

docker compose up -d          # Start in background
docker compose down           # Stop and remove
docker compose logs -f        # Follow logs
docker compose ps             # List services
docker compose exec api bash  # Access service shell
docker compose build          # Rebuild images
docker compose restart        # Restart services

🎉 Excellent! You now know Docker Compose!

You can now:

  • Manage multi-container applications easily
  • Define entire stacks in one file
  • Use Docker Compose for development
  • Deploy simple production applications

Docker Networking

Now let's learn how containers communicate with each other and the outside world.


Part 1: Understanding Container Networking Basics

The Networking Challenge

Simple Question: How do containers talk to each other?

Scenario:

You have:
├── Web application container (needs to talk to database)
├── Database container (needs to receive connections)
└── Redis cache container (needs to be accessed)

How do they communicate? 🤔

Container Isolation

Remember: Containers are ISOLATED

By default:
├── Each container has its own network stack
├── Own IP address
├── Own network interface
├── Cannot see other containers
└── Like separate computers on a network

Visual:

┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│  Container 1    │  │  Container 2    │  │  Container 3    │
│                 │  │                 │  │                 │
│  IP: 172.17.0.2 │  │  IP: 172.17.0.3 │  │  IP: 172.17.0.4 │
│                 │  │                 │  │                 │
└─────────────────┘  └─────────────────┘  └─────────────────┘
        ↑                    ↑                    ↑
        └────────────────────┴────────────────────┘
                    Docker Network

How Containers Access the Outside World

Your Computer (Host):

Your Computer:
├── IP: 192.168.1.100 (on your home network)
├── Can access internet
└── Runs Docker

Container inside:
├── Has own IP: 172.17.0.2
├── Can access internet through host
└── Uses NAT (Network Address Translation)

Visual:

Internet
   ↕
Your Computer (192.168.1.100)
   ↕
Docker Network (172.17.0.0/16)
   ↕
Containers (172.17.0.2, 172.17.0.3, ...)

Part 2: Default Bridge Network

What is the Bridge Network?

Bridge Network = Default network Docker creates

When you run a container without specifying network:

docker run nginx

It automatically connects to the "bridge" network.


Viewing Networks

List all networks:

docker network ls

Output:

NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
f6e5d4c3b2a1   host      host      local
1a2b3c4d5e6f   none      null      local

Three default networks:

bridge:
├── Default network
├── Containers can communicate
└── Most commonly used

host:
├── Container uses host's network
├── No isolation
└── Advanced use case

none:
├── No network
└── Completely isolated

Inspecting the Bridge Network

docker network inspect bridge

Output (simplified):

[
    {
        "Name": "bridge",
        "Driver": "bridge",
        "Scope": "local",
        "IPAM": {
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "abc123...": {
                "Name": "container1",
                "IPv4Address": "172.17.0.2/16"
            }
        }
    }
]

Key Information:

Subnet: 172.17.0.0/16
└── IP range for containers

Gateway: 172.17.0.1
└── Docker's network gateway

Containers:
└── Lists all containers on this network
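
Because docker network inspect emits JSON, the key fields are easy to pull out programmatically. Here is a sketch against a trimmed copy of the output above; the standard library's ipaddress module also confirms what the /16 subnet means:

```python
import ipaddress
import json

# Trimmed sample of `docker network inspect bridge` output
raw = """
[
  {
    "Name": "bridge",
    "IPAM": {"Config": [{"Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1"}]}
  }
]
"""

config = json.loads(raw)[0]["IPAM"]["Config"][0]
subnet = ipaddress.ip_network(config["Subnet"])

print(config["Gateway"])                             # 172.17.0.1
print(subnet.num_addresses)                          # 65536 (a /16 range)
print(ipaddress.ip_address("172.17.0.2") in subnet)  # True: a container IP
```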

Testing Default Network

Run two containers:

# Container 1
docker run -d --name web1 nginx

# Container 2
docker run -d --name web2 nginx

Check their IPs (findstr on Windows; use grep on Linux/macOS):

docker inspect web1 | findstr IPAddress

Output:

"IPAddress": "172.17.0.2"

docker inspect web2 | findstr IPAddress

Output:

"IPAddress": "172.17.0.3"

Containers have different IPs on same network! ✓


Trying to Communicate (Default Bridge)

Access web1 from web2:

docker exec -it web2 bash

Inside web2 container:

# Try to ping web1 by IP
apt-get update && apt-get install -y iputils-ping
ping 172.17.0.2

# Output:
# PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
# 64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.073 ms
# ✓ Can reach by IP!

# Try to ping by name
ping web1

# Output:
# ping: web1: Name or service not known
# ✗ Cannot reach by name!

Important Discovery:

Default bridge network:
✓ Containers CAN communicate by IP address
✗ Containers CANNOT communicate by name
└── Must use IP addresses (not convenient!)

Part 3: Custom Bridge Networks

Why Create Custom Networks?

Custom networks provide:

✓ Automatic DNS resolution (use container names!)
✓ Better isolation
✓ More control
✓ Can create multiple networks
└── Best practice for multi-container apps

Creating a Custom Network

Syntax:

docker network create NETWORK_NAME

Example:

docker network create my-network

Output:

b2c3d4e5f6a718293a4b5c6d7e8f90a1b2c3d4e5f6a718293a4b5c6d7e8f90a1
↑
Network ID

Verify:

docker network ls

Output:

NETWORK ID     NAME         DRIVER    SCOPE
a1b2c3d4e5f6   bridge       bridge    local
b2c3d4e5f6a7   my-network   bridge    local  ← Your new network!
f6e5d4c3b2a1   host         host      local
1a2b3c4d5e6f   none         null      local

Using Custom Network

Run containers on custom network:

# Container 1
docker run -d --name app1 --network my-network nginx

# Container 2
docker run -d --name app2 --network my-network nginx

Now test communication:

docker exec -it app2 bash

Inside app2:

# Install curl
apt-get update && apt-get install -y curl

# Access app1 by NAME!
curl http://app1

# Output:
# <!DOCTYPE html>
# <html>
# <head>
# <title>Welcome to nginx!</title>
# ...
# ✓ Works! Can use container name!

# Also try by IP
curl http://172.18.0.2

# ✓ Also works!

Magic! 🎉

Custom network provides:
✓ DNS resolution (container name → IP)
✓ No need to know IP addresses
✓ Use friendly names
└── Much easier to work with!

Real-World Example: Web App + Database

Scenario: Flask app needs to connect to MySQL

Step 1: Create custom network

docker network create app-network

Step 2: Run MySQL container

docker run -d \
  --name mysql-db \
  --network app-network \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=myapp \
  mysql:8.0

Step 3: Create Python app

app.py:

import mysql.connector
import time

# Retry until MySQL is ready (the first start can take a while)
for attempt in range(30):
    try:
        # Connect using container name!
        db = mysql.connector.connect(
            host="mysql-db",  # ← Container name!
            user="root",
            password="secret",
            database="myapp"
        )
        break
    except mysql.connector.Error:
        time.sleep(2)
else:
    raise RuntimeError("MySQL never became ready")

cursor = db.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS users (id INT, name VARCHAR(50))")
cursor.execute("INSERT INTO users VALUES (1, 'Alice')")
db.commit()

cursor.execute("SELECT * FROM users")
for row in cursor:
    print(f"User: {row[1]}")

db.close()

requirements.txt:

mysql-connector-python

Dockerfile:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]

Step 4: Build and run app

docker build -t my-app .

docker run --network app-network my-app

Output:

User: Alice

✓ App connected to database using container name!


Part 4: Port Publishing (Port Mapping)

Understanding Port Publishing

Problem:

Container running web server on port 80
├── Port 80 INSIDE container
├── Your computer can't access it
└── External users can't access it

Solution: Port Publishing

Map container port to host port:
Container port 80 → Host port 8080

Now:
├── Access localhost:8080 on your computer
│       ↓
├── Traffic goes to container port 80
└── Web server accessible! ✓

Port Publishing Syntax

Syntax:

docker run -p HOST_PORT:CONTAINER_PORT IMAGE

Examples:

# Map port 8080 to 80
docker run -p 8080:80 nginx

# Map port 3000 to 3000
docker run -p 3000:3000 node-app

# Map port 5432 to 5432
docker run -p 5432:5432 postgres

Multiple Port Mappings

docker run -p 8080:80 -p 8443:443 nginx
#          ↑            ↑
#     HTTP port    HTTPS port

Viewing Port Mappings

docker ps

Output:

CONTAINER ID   IMAGE   PORTS                                   NAMES
abc123def456   nginx   0.0.0.0:8080->80/tcp                   web
                       ↑       ↑    ↑  ↑
                       │       │    │  └── Protocol
                       │       │    └── Container port
                       │       └── Host port
                       └── Listen on all interfaces

Port Binding to Specific Interface

Bind to all interfaces (default):

docker run -p 8080:80 nginx
# Accessible from anywhere

Bind to localhost only:

docker run -p 127.0.0.1:8080:80 nginx
# Only accessible from this computer
# Not accessible from network

Automatic Port Assignment

Let Docker choose the port:

docker run -P nginx
#          ↑
#     Capital P

Docker assigns random port:

docker ps

Output:

PORTS
0.0.0.0:32768->80/tcp
        ↑
   Random port assigned

Access: http://localhost:32768


Part 5: Network Types in Detail

1. Bridge Network (Default)

What it is:

Default network type
├── Software bridge
├── Containers on same bridge can communicate
└── Most common type

When to use:

✓ Single host
✓ Multiple containers need to communicate
✓ Standard web applications

Example:

docker network create --driver bridge my-bridge
docker run --network my-bridge nginx

2. Host Network

What it is:

Container uses host's network directly
├── No network isolation
├── Container shares host's IP
└── Better performance (no NAT)

Example:

docker run --network host nginx

What happens:

Container:
├── Uses host's IP address
├── Port 80 on container = Port 80 on host
├── No port mapping needed
└── Cannot run multiple containers on same port

When to use:

✓ Need maximum network performance
✓ Network debugging
✗ Less isolation (security concern)

3. None Network

What it is:

No network at all
├── Completely isolated
├── No internet access
└── No container communication

Example:

docker run --network none nginx

When to use:

✓ Maximum isolation
✓ Security-critical containers
✓ Batch processing (no network needed)

4. Overlay Network (Advanced)

What it is:

Connects containers across multiple Docker hosts
├── For Docker Swarm
├── Multi-host networking
└── Advanced orchestration

Example:

docker network create --driver overlay my-overlay

When to use:

✓ Docker Swarm mode
✓ Multiple servers
✓ Distributed applications

Part 6: Connecting Containers to Multiple Networks

Container on Multiple Networks

A container can be on multiple networks!

Example:

# Create two networks
docker network create frontend
docker network create backend

# Run database on backend only
docker run -d --name db --network backend -e MYSQL_ROOT_PASSWORD=secret mysql

# Run API on both networks
docker run -d --name api --network backend nginx
docker network connect frontend api

# Run web on frontend only
docker run -d --name web --network frontend nginx

Result:

frontend network:
├── api ✓
└── web ✓

backend network:
├── api ✓
└── db ✓

Communication:
├── web → api (via frontend) ✓
├── api → db (via backend) ✓
├── web → db ✗ (not on same network)
└── Isolation achieved! ✓
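The reachability rule is simple: two containers can talk only if they share at least one network. A toy Python model of the setup above makes the rule explicit (membership table is hypothetical, mirroring the example):

```python
# Hypothetical membership table matching the example above
networks = {
    "frontend": {"web", "api"},
    "backend": {"api", "db"},
}

def can_communicate(a: str, b: str) -> bool:
    """Two containers can reach each other iff they share a network."""
    return any(a in members and b in members for members in networks.values())

assert can_communicate("web", "api")      # via frontend
assert can_communicate("api", "db")       # via backend
assert not can_communicate("web", "db")   # no shared network = isolated
```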

Connecting/Disconnecting Networks

Connect container to network:

docker network connect NETWORK_NAME CONTAINER_NAME

Disconnect container from network:

docker network disconnect NETWORK_NAME CONTAINER_NAME

Example:

# Connect web to backend
docker network connect backend web

# Now web can access db!

# Disconnect web from backend
docker network disconnect backend web

# web can no longer access db

Part 7: DNS and Service Discovery

Automatic DNS Resolution

Custom networks provide automatic DNS:

Container names = Hostnames

my-app container:
├── Can be reached at: my-app
├── Can be reached at: my-app.my-network
└── Automatic DNS resolution

Testing DNS Resolution

Run containers:

docker network create test-net
docker run -d --name server1 --network test-net nginx
docker run -it --name client --network test-net ubuntu bash

Inside client:

# Install tools
apt-get update && apt-get install -y dnsutils curl

# Test DNS resolution
nslookup server1

# Output:
# Server:         127.0.0.11
# Address:        127.0.0.11#53
# 
# Name:   server1
# Address: 172.18.0.2

# ✓ Container name resolved to IP!

# Access server
curl http://server1
# ✓ Works!

Network Aliases

Give containers additional names:

docker run -d \
  --name mysql-db \
  --network app-net \
  --network-alias database \
  --network-alias db \
  mysql

Now can access as:

mysql-db   (container name)
database   (alias)
db         (alias)

Useful for:

✓ Backward compatibility
✓ Multiple names for same service
✓ Clearer naming
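Conceptually, the container name and every alias are just extra DNS records pointing at the same container IP. A toy lookup table sketches the idea (the IP here is made up; Docker's embedded DNS at 127.0.0.11 does the real work):

```python
# Toy resolver table: the container name and each --network-alias
# all resolve to the same (hypothetical) container IP.
records = {"mysql-db": "172.18.0.2"}
for alias in ("database", "db"):
    records[alias] = records["mysql-db"]

def resolve(name: str) -> str:
    return records[name]

assert resolve("db") == resolve("mysql-db") == "172.18.0.2"
```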

Part 8: Network Isolation Patterns

Pattern 1: Multi-Tier Application

Structure:

┌─────────────────────────────────────┐
│         frontend network             │
│  ┌──────────┐      ┌──────────┐    │
│  │   Web    │──────│   API    │    │
│  └──────────┘      └──────────┘    │
└─────────────────────────┬───────────┘
                          │
┌─────────────────────────┴───────────┐
│         backend network              │
│  ┌──────────┐      ┌──────────┐    │
│  │   API    │──────│ Database │    │
│  └──────────┘      └──────────┘    │
└─────────────────────────────────────┘

Isolation:
├── Web can only talk to API
├── API can talk to both
└── Database is hidden from Web

Implementation:

# Create networks
docker network create frontend
docker network create backend

# Database (backend only)
docker run -d --name db --network backend postgres

# API (both networks)
docker run -d --name api --network backend my-api
docker network connect frontend api

# Web (frontend only)
docker run -d --name web --network frontend -p 80:80 nginx

Pattern 2: Microservices Isolation

Each service on own network:

┌────────────┐  ┌────────────┐  ┌────────────┐
│  Service A │  │  Service B │  │  Service C │
│  Network A │  │  Network B │  │  Network C │
└────────────┘  └────────────┘  └────────────┘
       ↓               ↓               ↓
   ┌────────────────────────────────────┐
   │       API Gateway Network          │
   │         (Common Network)           │
   └────────────────────────────────────┘

Part 9: Practical Multi-Container Application

Building a Complete App

Let's build: Web App + API + Database

Step 1: Create networks

docker network create frontend
docker network create backend

Step 2: Run PostgreSQL

docker run -d \
  --name postgres \
  --network backend \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=myapp \
  postgres:15

Step 3: Create API (api.py)

from flask import Flask, jsonify
import psycopg2
import os

app = Flask(__name__)

def get_db():
    return psycopg2.connect(
        host="postgres",  # Container name!
        database="myapp",
        user="postgres",
        password="secret"
    )

@app.route('/api/users')
def get_users():
    db = get_db()
    cursor = db.cursor()
    cursor.execute("SELECT version()")
    version = cursor.fetchone()
    return jsonify({"database": version[0]})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

requirements.txt:

flask
psycopg2-binary

Dockerfile:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY api.py .
CMD ["python", "api.py"]

Build and run:

docker build -t my-api .

docker run -d \
  --name api \
  --network backend \
  my-api

# Connect to frontend too
docker network connect frontend api

Step 4: Create Web Frontend (index.html)

<!DOCTYPE html>
<html>
<head>
    <title>My App</title>
</head>
<body>
    <h1>Multi-Container App</h1>
    <button onclick="fetchData()">Get Database Info</button>
    <pre id="result"></pre>
    
    <script>
        async function fetchData() {
            const response = await fetch('/api/users');
            const data = await response.json();
            document.getElementById('result').textContent = JSON.stringify(data, null, 2);
        }
    </script>
</body>
</html>

nginx.conf:

server {
    listen 80;
    
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
    
    location /api/ {
        proxy_pass http://api:5000/api/;
    }
}

Dockerfile:

FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/conf.d/default.conf

Build and run:

docker build -t my-web .

docker run -d \
  --name web \
  --network frontend \
  -p 8080:80 \
  my-web

Step 5: Test the application

Open browser: http://localhost:8080

Click button → Data from database! ✓

Architecture:

User Browser
     ↓
Web Container (frontend network)
     ↓
API Container (frontend + backend networks)
     ↓
Database Container (backend network)

✓ Web can't directly access database
✓ API bridges the networks
✓ Proper isolation!

Part 10: Network Commands Reference

Complete Network Commands

# List networks
docker network ls

# Create network
docker network create NETWORK_NAME

# Inspect network
docker network inspect NETWORK_NAME

# Remove network
docker network rm NETWORK_NAME

# Remove all unused networks
docker network prune

# Connect container to network
docker network connect NETWORK_NAME CONTAINER_NAME

# Disconnect container from network
docker network disconnect NETWORK_NAME CONTAINER_NAME

Creating Networks with Options

# Create with custom subnet
docker network create --subnet=192.168.1.0/24 my-net

# Create with custom gateway
docker network create --gateway=192.168.1.1 my-net

# Create with driver
docker network create --driver bridge my-net

# Create with labels
docker network create --label env=prod my-net
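When you combine --subnet and --gateway, the gateway address must fall inside the subnet. A quick sanity check with Python's stdlib ipaddress module (an illustration, not Docker's own validation code):

```python
import ipaddress

def validate_network(subnet: str, gateway: str) -> bool:
    """Check that a --gateway address lies inside the --subnet range."""
    net = ipaddress.ip_network(subnet)
    return ipaddress.ip_address(gateway) in net

assert validate_network("192.168.1.0/24", "192.168.1.1")
assert not validate_network("192.168.1.0/24", "10.0.0.1")
```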

Summary

What We Learned:

✅ Container networking basics
✅ Default bridge network
✅ Custom bridge networks
✅ DNS resolution in custom networks
✅ Port publishing/mapping
✅ Network types (bridge, host, none)
✅ Multiple networks per container
✅ Network isolation patterns
✅ Multi-container applications
✅ Service discovery

Key Concepts:

1. Use custom networks for container communication
2. Container names = Hostnames (with custom networks)
3. Port publishing exposes containers to outside
4. Multiple networks = Network isolation
5. Frontend/Backend pattern for security

Best Practices:

✓ Always use custom networks (not default bridge)
✓ Use container names (not IP addresses)
✓ Separate frontend/backend networks
✓ Only publish ports that need external access
✓ Use network aliases for flexibility

Excellent! You now understand Docker networking!

This completes Phase 6: Docker Networking!

🎉 Congratulations! You've completed the Basic Docker Roadmap!

You now know:

  • ✅ Phase 1: Understanding Docker (Why, What, Architecture)
  • ✅ Phase 2: Installation & First Steps
  • ✅ Phase 3: Working with Images
  • ✅ Phase 4: Creating Your Own Images (Dockerfile)
  • ✅ Phase 5: Container Data Management (Volumes)
  • ✅ Phase 6: Docker Networking

Container Data Management (Volumes)

Now let's learn about one of the most important topics in Docker - how to manage data in containers.


Part 1: The Container Data Problem

Understanding Container Filesystem

Important Concept: Containers are EPHEMERAL (temporary)

What does this mean?

When you create a container:
├── It has its own filesystem
├── You can create/modify files inside
└── Everything works normally

When you delete the container:
├── ALL data inside is lost! ✗
├── Files gone forever
└── No way to recover

Demonstrating the Problem

Let's see this in action!

Step 1: Run Ubuntu container and create a file

docker run -it --name test-container ubuntu bash

Inside the container:

# You're now inside Ubuntu container
# Create a file
echo "Important data!" > /data.txt

# Verify it exists
cat /data.txt
# Output: Important data!

# Exit container
exit

Step 2: Start the same container again

docker start test-container
docker exec -it test-container bash

Inside container:

# Check if file still exists
cat /data.txt
# Output: Important data!

# File is still there! ✓
exit

Step 3: Remove and create new container

# Remove the container
docker rm test-container

# Create a new container (same image)
docker run -it --name test-container2 ubuntu bash

Inside new container:

# Try to find the file
cat /data.txt
# Error: No such file or directory ✗

# File is GONE! ✗

What happened?

Container 1:
├── Created data.txt
├── Data stored in container's writable layer
└── Removed → Data lost forever! ✗

Container 2:
├── Fresh container from same image
├── No data from Container 1
└── Starting from scratch

Problem: Data is tied to container lifecycle!

Real-World Problem Scenarios

Scenario 1: Database Container

Run MySQL container:
├── Create database
├── Add tables
└── Insert 1000 customer records

Container crashes:
└── Restart container → Data still there ✓

Accidentally delete container:
├── All data GONE! ✗
├── 1000 customer records lost!
└── Disaster! ✗

Scenario 2: Web Application

Upload feature:
├── Users upload photos
├── Photos saved in /uploads/ inside container

Update application (new container):
├── Deploy new version
├── Remove old container
├── All uploaded photos GONE! ✗
└── Users angry! ✗

Scenario 3: Log Files

Application writes logs:
├── Debug logs in /var/log/app/
└── Error logs accumulating

Container deleted:
├── All logs lost ✗
├── Can't debug past issues ✗
└── No audit trail ✗

The Solution: Docker Volumes

Volumes = Persistent storage outside the container

Think of it as:

Container = Temporary hotel room
├── You stay there temporarily
├── When you check out, room is cleaned
└── Your stuff is gone

Volume = Your storage unit
├── Permanent storage space
├── Exists outside the hotel
├── Your stuff stays even after checkout
└── Can access from any room (container)

Visual:

WITHOUT Volumes:
┌──────────────────┐
│   Container      │
│                  │
│  /data/          │ ← Data inside
│  └── files       │
└──────────────────┘
     ↓ Delete
    Data lost! ✗


WITH Volumes:
┌──────────────────┐     ┌──────────────┐
│   Container      │     │   Volume     │
│                  │────→│              │
│  /data/ (mount)  │     │  Real data   │
│                  │     │  stored here │
└──────────────────┘     └──────────────┘
     ↓ Delete                   ↓
  Container gone            Data safe! ✓

Part 2: What are Docker Volumes?

Simple Definition

Volume = A storage space managed by Docker that exists outside containers

Key Characteristics:

Volumes are:
├── Persistent (survive container deletion)
├── Managed by Docker
├── Can be shared between containers
├── Independent of container lifecycle
├── Stored on host machine
└── Easy to backup

How Volumes Work

Conceptual Model:

Host Machine (Your Computer):
├── Docker manages a special directory
├── /var/lib/docker/volumes/ (Linux)
└── This is where volume data is stored

Volume:
├── Named storage space
└── Like a hard drive managed by Docker

Container:
├── Mounts (connects to) the volume
├── Sees volume as a directory
└── Reads/writes to volume = permanent storage

Visual:

Your Computer Filesystem:
/var/lib/docker/volumes/
├── my-volume/
│   └── _data/
│       ├── file1.txt
│       └── file2.txt
└── db-volume/
    └── _data/
        └── database.db

Container:
/app/data/ ──(mounted)──→ my-volume
                          ↓
                    Actual storage location

Part 3: Creating and Using Volumes

Creating a Volume

Syntax:

docker volume create VOLUME_NAME

Example:

docker volume create my-data

Output:

my-data

That's it! Volume created! ✓


Listing Volumes

docker volume ls

Output:

DRIVER    VOLUME NAME
local     my-data

Inspecting a Volume

docker volume inspect my-data

Output:

[
    {
        "CreatedAt": "2026-02-24T10:30:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data/_data",
        "Name": "my-data",
        "Options": {},
        "Scope": "local"
    }
]

Important field:

Mountpoint: "/var/lib/docker/volumes/my-data/_data"
                    ↑
            Where data is actually stored on your computer

Using a Volume with Container

Mount volume when running container:

Syntax:

docker run -v VOLUME_NAME:/path/in/container IMAGE

Example:

docker run -it -v my-data:/data ubuntu bash

What this does:

-v my-data:/data
   ↑       ↑
   │       └── Path inside container
   └── Volume name

Container sees /data/ directory
/data/ is actually stored in my-data volume
Data persists even after container is deleted!
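The -v spec is just colon-separated fields, optionally ending in a mode flag like :ro. A small Python sketch of how such a spec splits apart (Linux-style specs only; Windows drive letters contain colons and need extra care, which this skips):

```python
def parse_volume_flag(spec: str):
    """Split a Linux-style -v spec ('name:/path' or 'name:/path:ro')
    into (source, target, read_only)."""
    parts = spec.split(":")
    read_only = parts[-1] == "ro"
    if read_only:
        parts = parts[:-1]
    source, target = parts
    return source, target, read_only

assert parse_volume_flag("my-data:/data") == ("my-data", "/data", False)
assert parse_volume_flag("my-data:/data:ro") == ("my-data", "/data", True)
```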

Practical Example: Persistent Data

Let's see volumes in action!

Step 1: Create a volume

docker volume create persistent-data

Step 2: Run container with volume

docker run -it --name container1 -v persistent-data:/data ubuntu bash

Inside container:

# Create some files
echo "This data will persist!" > /data/important.txt
echo "User database" > /data/users.db
echo "Configuration" > /data/config.json

# List files
ls /data/
# Output: important.txt  users.db  config.json

# Exit
exit

Step 3: Delete the container

docker rm container1

Step 4: Create NEW container with SAME volume

docker run -it --name container2 -v persistent-data:/data ubuntu bash

Inside new container:

# Check if data exists
ls /data/
# Output: important.txt  users.db  config.json

cat /data/important.txt
# Output: This data will persist!

# Data is still there! ✓
# Even though we deleted container1!

🎉 Volume preserved the data!


Multiple Containers Sharing a Volume

Volumes can be shared between containers!

Terminal 1:

docker run -it --name writer -v shared-data:/data ubuntu bash

Inside writer container:

# Write data
echo "Message from writer" > /data/message.txt

# Keep container running
sleep infinity

Terminal 2 (new terminal):

docker run -it --name reader -v shared-data:/data ubuntu bash

Inside reader container:

# Read data written by writer
cat /data/message.txt
# Output: Message from writer

# Data shared between containers! ✓

Use case:

Example: Microservices sharing data

Container 1 (Producer):
└── Writes log files to /logs

Container 2 (Analyzer):
└── Reads log files from /logs

Both mount same volume:
└── Data flows between them! ✓
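The sharing works because both containers' mount paths point at the same underlying directory. You can simulate the idea on the host with a temporary directory standing in for the volume:

```python
import pathlib
import tempfile

# A temporary directory stands in for the shared volume.
volume = pathlib.Path(tempfile.mkdtemp())

writer_view = volume / "message.txt"   # path as the writer container sees it
reader_view = volume / "message.txt"   # path as the reader container sees it

writer_view.write_text("Message from writer")
assert reader_view.read_text() == "Message from writer"  # same storage underneath
```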

Part 4: Bind Mounts

What are Bind Mounts?

Bind Mount = Mount a directory from YOUR computer into a container

Difference from Volumes:

Volume:
├── Managed by Docker
├── Stored in Docker's directory
└── docker volume create my-vol

Bind Mount:
├── You choose the directory
├── Any directory on your computer
└── Mount your own folder

Visual:

Volume (Managed by Docker):
Your Computer                Container
Docker manages:             
/var/lib/docker/volumes/    
└── my-vol/_data/     ────→ /data/
    └── files               

Bind Mount (You manage):
Your Computer                Container
Your directory:
C:\Users\You\project\       
└── code/             ────→ /app/
    └── files

Creating Bind Mounts

Syntax:

docker run -v /host/path:/container/path IMAGE

Windows example:

docker run -v C:\Users\YourName\myapp:/app ubuntu

Absolute path required!


Practical Example: Development Workflow

This is EXTREMELY useful for development!

Scenario: Developing a Python app

Step 1: Create project directory

mkdir C:\Users\YourName\python-app
cd C:\Users\YourName\python-app

Step 2: Create app.py

# app.py
print("Hello from Python!")
print("Version 1.0")

Step 3: Run with bind mount

docker run -it -v C:\Users\YourName\python-app:/app python:3.11 bash

Inside container:

cd /app
ls
# Output: app.py

python app.py
# Output: 
# Hello from Python!
# Version 1.0

# Keep container running
sleep infinity

Step 4: Edit file on YOUR computer (not in container)

Open app.py in your editor and change it:

# app.py
print("Hello from Python!")
print("Version 2.0 - Updated!")
print("New feature added!")

Save the file

Step 5: Run again in container (same container still running)

# Still inside the container
python app.py
# Output:
# Hello from Python!
# Version 2.0 - Updated!
# New feature added!

# Changes reflected immediately! ✓

What happened?

File on your computer:
C:\Users\YourName\python-app\app.py
                    ↓
                (bind mount)
                    ↓
File in container:
/app/app.py

They're the SAME file!
Edit on computer → Changes in container immediately! ✓

Bind Mount Benefits for Development

Traditional development:

Without bind mount:
1. Edit code on computer
2. Copy code into container (slow)
3. Test
4. Find bug
5. Exit container
6. Edit code again
7. Copy into container again (slow)
8. Repeat... ✗

With bind mount:

With bind mount:
1. Edit code on computer
2. Changes instantly in container ✓
3. Test immediately
4. Edit again
5. Test immediately
6. Fast iteration! ✓

Modern Bind Mount Syntax

Old syntax:

docker run -v C:\path:/container/path image

New syntax (recommended):

docker run --mount type=bind,source=C:\path,target=/container/path image

Example:

docker run --mount type=bind,source=C:\Users\YourName\myapp,target=/app python:3.11

Both work, but --mount is more explicit and clear.


Part 5: Volume vs Bind Mount - When to Use What?

Comparison

┌─────────────────────────────────────────────────────┐
│              VOLUMES vs BIND MOUNTS                 │
├─────────────────────────────────────────────────────┤
│                                                     │
│  VOLUMES:                                          │
│  ✓ Managed by Docker                              │
│  ✓ Better for production                          │
│  ✓ Works on all platforms                         │
│  ✓ Easy to backup                                 │
│  ✓ Can be shared easily                           │
│  ✗ Need docker volume commands to manage          │
│                                                     │
│  BIND MOUNTS:                                      │
│  ✓ Direct access to files                         │
│  ✓ Great for development                          │
│  ✓ Easy to edit files                             │
│  ✓ No docker commands needed                      │
│  ✗ Path must exist on host                        │
│  ✗ Platform-specific paths                        │
│                                                     │
└─────────────────────────────────────────────────────┘

When to Use Volumes

Use volumes for:

✓ Database data
  └── MySQL, PostgreSQL, MongoDB

✓ Production data
  └── Uploaded files, generated reports

✓ Data that must persist
  └── User data, configurations

✓ Shared data between containers
  └── Microservices communication

✓ Backups
  └── Easy to backup entire volume

Example: Database

docker run -d \
  --name mysql-db \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:8.0

When to Use Bind Mounts

Use bind mounts for:

✓ Development
  └── Edit code on computer, test in container

✓ Configuration files
  └── nginx.conf, app config

✓ Source code during development
  └── Live reload

✓ When you need direct file access
  └── Easy to edit/view files

Example: Development

docker run -d \
  -v C:\Users\You\myapp:/app \
  -p 5000:5000 \
  python:3.11 \
  python /app/app.py

Part 6: Real-World Examples

Example 1: MySQL Database with Volume

Run MySQL with persistent data:

# Create volume for database
docker volume create mysql-data

# Run MySQL container
docker run -d \
  --name mysql-db \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=mypassword \
  -e MYSQL_DATABASE=myapp \
  -p 3306:3306 \
  mysql:8.0

What happens:

Container created:
├── MySQL running
├── Creates database files
└── Stored in mysql-data volume

Stop/Remove container:
├── Container gone
└── Data safe in volume ✓

Start new container with same volume:
├── All databases restored ✓
└── No data loss ✓

Test it:

# Connect to MySQL
docker exec -it mysql-db mysql -uroot -pmypassword

# Inside MySQL:
CREATE TABLE users (id INT, name VARCHAR(50));
INSERT INTO users VALUES (1, 'Alice');
SELECT * FROM users;
# Data created ✓

exit

# Remove container
docker stop mysql-db
docker rm mysql-db

# Create new container with same volume
docker run -d \
  --name mysql-db-new \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=mypassword \
  mysql:8.0

# Wait 10 seconds for MySQL to start
# Connect again
docker exec -it mysql-db-new mysql -uroot -pmypassword myapp

# Check data
SELECT * FROM users;
# Output: 1 | Alice
# Data persisted! ✓

Example 2: Web App Development with Bind Mount

Create a simple web app:

Directory structure:

my-website/
├── index.html
├── style.css
└── app.js

index.html:

<!DOCTYPE html>
<html>
<head>
    <title>My App</title>
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <h1>Hello Docker!</h1>
    <p id="message">Loading...</p>
    <script src="app.js"></script>
</body>
</html>

style.css:

body {
    font-family: Arial;
    background-color: #f0f0f0;
    padding: 20px;
}

app.js:

document.getElementById('message').textContent = 'Version 1.0';

Run with bind mount:

docker run -d \
  --name web-dev \
  -v C:\Users\YourName\my-website:/usr/share/nginx/html \
  -p 8080:80 \
  nginx:alpine

Access: http://localhost:8080

Now edit files on your computer:

Change app.js:

document.getElementById('message').textContent = 'Version 2.0 - UPDATED!';

Refresh browser → Changes appear immediately! ✓

No need to rebuild or restart container!


Example 3: Sharing Data Between Containers

Scenario: Log producer and analyzer

Create shared volume:

docker volume create shared-logs

Container 1: Producer (generates logs)

docker run -d \
  --name log-producer \
  -v shared-logs:/logs \
  ubuntu \
  bash -c 'while true; do echo "Log entry: $(date)" >> /logs/app.log; sleep 5; done'

Container 2: Analyzer (reads logs)

docker run -it \
  --name log-analyzer \
  -v shared-logs:/logs \
  ubuntu \
  bash

Inside analyzer:

# Watch logs in real-time
tail -f /logs/app.log

# Output:
# Log entry: Mon Feb 24 10:30:00 UTC 2026
# Log entry: Mon Feb 24 10:30:05 UTC 2026
# Log entry: Mon Feb 24 10:30:10 UTC 2026
# ... keeps updating

# Both containers accessing same volume! ✓

Part 7: Volume Commands Reference

Complete Volume Commands

# Create volume
docker volume create VOLUME_NAME

# List volumes
docker volume ls

# Inspect volume (see details)
docker volume inspect VOLUME_NAME

# Remove volume
docker volume rm VOLUME_NAME

# Remove all unused volumes
docker volume prune

# Remove volume with force
docker volume rm -f VOLUME_NAME

Using Volumes with Containers

# Run with named volume
docker run -v VOLUME_NAME:/path IMAGE

# Run with bind mount (absolute path)
docker run -v /host/path:/container/path IMAGE

# Run with bind mount (current directory)
docker run -v ${PWD}:/app IMAGE

# Multiple volumes
docker run -v vol1:/data1 -v vol2:/data2 IMAGE

# Read-only volume
docker run -v VOLUME_NAME:/path:ro IMAGE
# :ro = read-only

Modern Mount Syntax

# Volume mount
docker run --mount type=volume,source=VOLUME_NAME,target=/path IMAGE

# Bind mount
docker run --mount type=bind,source=/host/path,target=/path IMAGE

# Read-only mount
docker run --mount type=volume,source=VOL,target=/path,readonly IMAGE
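A --mount spec is a comma-separated list of key=value fields, where a bare key like readonly acts as a boolean flag. A small Python sketch of that format (illustrative, not Docker's actual CLI parser):

```python
def parse_mount_flag(spec: str) -> dict:
    """Turn a --mount spec like 'type=bind,source=/host,target=/app,readonly'
    into a dict; bare keys become boolean flags."""
    options = {}
    for field in spec.split(","):
        key, _, value = field.partition("=")
        options[key] = value if value else True
    return options

opts = parse_mount_flag("type=volume,source=my-data,target=/data,readonly")
assert opts == {"type": "volume", "source": "my-data",
                "target": "/data", "readonly": True}
```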

Part 8: Anonymous Volumes

What are Anonymous Volumes?

Anonymous Volume = Volume without a name

Created automatically by Docker when you don't specify name:

docker run -v /data ubuntu
#             ↑
#        No name = anonymous volume

Docker generates random name:

VOLUME NAME
a1b2c3d4e5f6...

When Anonymous Volumes are Used

Example: Some images create anonymous volumes by default

# In Dockerfile
VOLUME /data

When a container runs from this image, Docker creates an anonymous volume automatically.


Problem with Anonymous Volumes

docker run image1
# Creates anonymous volume: abc123

docker run image1
# Creates ANOTHER anonymous volume: def456

docker run image1
# Creates ANOTHER anonymous volume: ghi789

Result:
├── 3 containers
├── 3 anonymous volumes
└── Hard to manage! ✗

Better: Use named volumes!

docker run -v my-data:/data image
# Same named volume reused ✓

Part 9: Backing Up and Restoring Volumes

Backup a Volume

Method: Use a temporary container to tar the volume

# Backup volume to tar file
docker run --rm -v VOLUME_NAME:/data -v ${PWD}:/backup ubuntu tar czf /backup/backup.tar.gz /data

Explanation:

--rm                        = Remove container after done
-v VOLUME_NAME:/data       = Mount volume to backup
-v ${PWD}:/backup          = Mount current directory
ubuntu                      = Use Ubuntu image
tar czf /backup/backup.tar.gz /data = Compress /data to backup file

Example:

# Backup mysql-data volume
docker run --rm \
  -v mysql-data:/data \
  -v C:\Users\You\backups:/backup \
  ubuntu \
  tar czf /backup/mysql-backup.tar.gz /data

Restore a Volume

# Create new volume
docker volume create restored-data

# Restore from backup
docker run --rm \
  -v restored-data:/data \
  -v ${PWD}:/backup \
  ubuntu \
  bash -c "cd /data && tar xzf /backup/backup.tar.gz --strip-components=1"
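The same backup-and-restore round trip can be sketched in Python with the stdlib tarfile module. Here temporary directories stand in for the volume mountpoint and the backup folder:

```python
import pathlib
import tarfile
import tempfile

# Stand-in for a volume's mountpoint, with one file in it.
data = pathlib.Path(tempfile.mkdtemp()) / "data"
data.mkdir()
(data / "users.db").write_text("1,Alice")

# Backup: compress the data directory (like `tar czf`).
backup = pathlib.Path(tempfile.mkdtemp()) / "backup.tar.gz"
with tarfile.open(backup, "w:gz") as tar:
    tar.add(data, arcname="data")

# Restore: extract into a fresh directory (like `tar xzf`).
restore = pathlib.Path(tempfile.mkdtemp())
with tarfile.open(backup, "r:gz") as tar:
    tar.extractall(restore)

assert (restore / "data" / "users.db").read_text() == "1,Alice"
```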

Part 10: Cleaning Up Volumes

Remove Single Volume

# Must stop/remove containers using it first
docker volume rm VOLUME_NAME

If volume is in use:

Error: volume is in use

Solution:
1. docker ps -a (find containers using volume)
2. docker rm CONTAINER (remove those containers)
3. docker volume rm VOLUME_NAME (now works)

Remove All Unused Volumes

docker volume prune

Output:

WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y

Deleted Volumes:
volume1
volume2
anonymous-volume-abc123

Total reclaimed space: 2.5GB

Be careful! This removes data permanently!


Practice Exercises

Exercise 1: Persistent Counter

Create a container that counts:

# Create volume
docker volume create counter-data

# Run container
docker run -it -v counter-data:/data ubuntu bash

Inside container:

# Create counter file
echo "0" > /data/count.txt

# Increment counter
COUNT=$(cat /data/count.txt)
COUNT=$((COUNT + 1))
echo $COUNT > /data/count.txt
echo "Count: $COUNT"

exit

Run again (multiple times):

docker run -it -v counter-data:/data ubuntu bash

Inside container:

# Increment and show
COUNT=$(cat /data/count.txt)
COUNT=$((COUNT + 1))
echo $COUNT > /data/count.txt
echo "Count: $COUNT"

Each time, count increases! Data persists! ✓
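The read-increment-write loop in this exercise can be expressed as a small Python function, with a temporary file standing in for the volume-backed counter:

```python
import pathlib
import tempfile

def increment_counter(count_file: pathlib.Path) -> int:
    """Read the counter, add one, and write it back (mirrors the
    shell commands in the exercise)."""
    count = int(count_file.read_text()) + 1
    count_file.write_text(str(count))
    return count

f = pathlib.Path(tempfile.mkdtemp()) / "count.txt"
f.write_text("0")
assert increment_counter(f) == 1
assert increment_counter(f) == 2   # value persists between calls
```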


Exercise 2: Development Environment

Setup:

# Create project directory
mkdir my-python-project
cd my-python-project

# Create app.py
echo 'print("Hello!")' > app.py

# Run with bind mount
docker run -it -v ${PWD}:/app python:3.11 bash

Inside container:

cd /app
python app.py

Now edit app.py on your computer, run again in container → See changes!


Summary

What We Learned:

✅ Container data is temporary by default
✅ Volumes provide persistent storage
✅ Three types of mounts:
   ├── Named volumes (managed by Docker)
   ├── Anonymous volumes (random names)
   └── Bind mounts (your directories)
✅ When to use volumes vs bind mounts
✅ Creating and managing volumes
✅ Sharing data between containers
✅ Backing up and restoring
✅ Real-world examples

Key Takeaways:

Volumes:
├── Use for production data
├── Database storage
├── Persistent application data
└── Easy backups

Bind Mounts:
├── Use for development
├── Source code
├── Configuration files
└── Direct file access

Creating Your Own Images (Dockerfile)

Now we'll learn how to create your own custom Docker images using Dockerfiles. This is where Docker becomes really powerful!


Part 1: What is a Dockerfile?

Simple Definition

Dockerfile = A text file containing instructions to build a Docker image.

Think of it as:

Recipe Card (Dockerfile):
├── List ingredients (base image)
├── Preparation steps (RUN commands)
├── Add your items (COPY files)
├── Cooking instructions (CMD)
└── Final dish (your custom image)

Following recipe → Creates the dish
Reading Dockerfile → Builds the image

Another Analogy:

Construction Blueprint (Dockerfile):
├── Foundation type (FROM)
├── Building materials (RUN install packages)
├── Interior design (COPY your files)
├── Final touches (CMD)
└── Complete building (custom image)

Why Create Custom Images?

Instead of using existing images, you create custom ones to:

1. Package your own application
   └── Your code + environment together

2. Customize existing images
   └── Add tools/packages you need

3. Create reproducible environments
   └── Same setup everywhere

4. Share with team
   └── Everyone uses same environment

5. Deploy applications
   └── Production-ready packages

Dockerfile Basics

A Dockerfile is just a text file named Dockerfile (no extension).

Simple example:

FROM ubuntu:22.04
RUN apt-get update
RUN apt-get install -y python3
COPY app.py /app/
CMD ["python3", "/app/app.py"]

What this does:

Line 1: Start with Ubuntu 22.04 as base
Line 2: Update package lists
Line 3: Install Python 3
Line 4: Copy your app.py file into image
Line 5: Run your app when container starts

Part 2: Dockerfile Instructions - FROM

FROM - The Base Image

FROM = Starting point for your image

Syntax:

FROM image:tag

Examples:

# Start with Ubuntu
FROM ubuntu:22.04

# Start with Python already installed
FROM python:3.11

# Start with Node.js
FROM node:18

# Start with minimal Alpine Linux
FROM alpine:3.18

# Start from scratch (empty image)
FROM scratch

Understanding FROM

Every Dockerfile MUST start with FROM:

FROM ubuntu:22.04
# ↑
# This is always the first instruction
# (except for ARG, which we'll learn later)

Why use different base images?

Choose based on needs:

Need Python?
FROM python:3.11
└── Python already installed ✓

Need Node.js?
FROM node:18
└── Node.js already installed ✓

Want minimal size?
FROM alpine:3.18
└── Smallest base (5MB) ✓

Want full Ubuntu?
FROM ubuntu:22.04
└── More packages available ✓

Starting from scratch?
FROM scratch
└── Build everything yourself

Example: Different Base Images

Example 1: Start with Ubuntu, install Python

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3
# Manual installation

Example 2: Start with Python already installed

FROM python:3.11
# Python already included! ✓

Which is better?

Option 2 (FROM python:3.11) because:
├── Python pre-configured correctly
├── Includes pip and other tools
├── Follows best practices
└── Less code to write

Unless you need specific Ubuntu features,
use official language images! ✓

Part 3: Dockerfile Instructions - RUN

RUN - Execute Commands During Build

RUN = Run commands when building the image

Syntax:

RUN command

Examples:

# Install packages (Ubuntu/Debian)
RUN apt-get update && apt-get install -y curl

# Install packages (Alpine)
RUN apk add --no-cache curl

# Install Python packages
RUN pip install flask

# Create directories
RUN mkdir -p /app/data

# Download files
RUN curl -O https://example.com/file.zip

RUN - Important Concepts

Each RUN creates a new layer:

FROM ubuntu:22.04
RUN apt-get update           # Layer 1
RUN apt-get install -y curl  # Layer 2
RUN apt-get install -y vim   # Layer 3

Better approach - combine commands:

FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y curl vim
# Single layer ✓

Why combine?

Separate RUN commands:
├── Layer 1: apt-get update (stores package lists)
├── Layer 2: install curl
├── Layer 3: install vim
└── Extra layers, and a cached apt-get update can
    go stale and pair with a newer install ✗

Combined RUN:
├── Layer 1: update + installs together
└── Package lists are always fresh, and cleanup in
    the same layer can actually shrink the image ✓

RUN Examples for Different Languages

Python:

FROM python:3.11
RUN pip install flask sqlalchemy requests

Node.js:

FROM node:18
RUN npm install express mongoose

System packages:

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y \
    git \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/*
# ↑ Cleanup to reduce size

Part 4: Dockerfile Instructions - COPY and ADD

COPY - Copy Files from Host to Image

COPY = Copy files from your computer into the image

Syntax:

COPY source destination

Examples:

# Copy single file
COPY app.py /app/

# Copy all files in current directory
COPY . /app/

# Copy multiple files
COPY app.py config.json /app/

# Copy directory
COPY ./src /app/src/

Understanding COPY

Visual:

Your Computer:              Docker Image:
┌─────────────────┐        ┌──────────────┐
│ my-project/     │        │              │
│ ├── app.py      │  COPY  │ /app/        │
│ ├── config.json │───────→│ ├── app.py   │
│ └── data/       │        │ └── config   │
└─────────────────┘        └──────────────┘

Example Dockerfile:

FROM python:3.11

# Create app directory in image
RUN mkdir /app

# Copy your files into image
COPY app.py /app/
COPY requirements.txt /app/

# Copy everything
COPY . /app/

ADD vs COPY

ADD = Like COPY but with extra features

# COPY: Just copies files
COPY app.py /app/

# ADD: Copies AND extracts archives
ADD archive.tar.gz /app/
# ↑ Automatically extracts!

# ADD: Can download from URL
ADD https://example.com/file.txt /app/

Which to use?

Use COPY (recommended):
├── Simpler
├── More predictable
└── Best practice ✓

Use ADD only when:
├── Need to extract archives
└── Need to download from URL

Docker best practice: Prefer COPY
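
ADD's archive handling can be pictured with a short Python sketch (an analogy, not Docker's implementation): for a local tar archive, ADD extracts the contents into the destination, whereas COPY would place the .tar.gz file there unchanged.

```python
import io
import os
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    archive = os.path.join(tmp, "archive.tar.gz")

    # Build a tiny archive containing one file.
    data = b"hello"
    info = tarfile.TarInfo("hello.txt")
    info.size = len(data)
    with tarfile.open(archive, "w:gz") as tar:
        tar.addfile(info, io.BytesIO(data))

    # "ADD archive.tar.gz /app/" — extract into the destination.
    dest = os.path.join(tmp, "app")
    os.makedirs(dest)
    with tarfile.open(archive) as tar:
        tar.extractall(dest)

    extracted = os.listdir(dest)

print(extracted)  # ['hello.txt']
```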

Part 5: Dockerfile Instructions - WORKDIR

WORKDIR - Set Working Directory

WORKDIR = Set the current directory inside the image

Syntax:

WORKDIR /path/to/directory

Without WORKDIR:

FROM ubuntu:22.04
RUN mkdir /app
COPY app.py /app/
RUN cd /app && python3 app.py
# ↑ Must keep specifying /app

With WORKDIR:

FROM ubuntu:22.04
WORKDIR /app
# ↑ Set once

COPY app.py .
# . means current directory (/app)

RUN python3 app.py
# Already in /app directory

WORKDIR Benefits

Makes Dockerfile cleaner:

FROM python:3.11

# Set working directory
WORKDIR /app

# Now all commands run in /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "app.py"]
# Runs from /app directory automatically

WORKDIR creates the directory if it doesn't exist:

FROM ubuntu:22.04
WORKDIR /app/data/logs
# ↑ Creates entire path automatically ✓
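
As a loose Python analogy (an illustration, not Docker's implementation), WORKDIR behaves like os.makedirs: the whole path is created if any part of it is missing.

```python
import os
import tempfile

# WORKDIR /app/data/logs creates the whole path if it doesn't exist,
# much like os.makedirs with exist_ok=True:
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "app", "data", "logs")
    os.makedirs(path, exist_ok=True)   # creates app/, app/data/, app/data/logs/
    created = os.path.isdir(path)

print(created)  # True
```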

Part 6: Dockerfile Instructions - CMD and ENTRYPOINT

CMD - Default Command

CMD = Command to run when container starts

Syntax:

CMD ["executable", "param1", "param2"]

Examples:

# Run Python script
CMD ["python3", "app.py"]

# Start web server
CMD ["nginx", "-g", "daemon off;"]

# Run shell command
CMD ["echo", "Hello Docker!"]

Understanding CMD

CMD sets the default command:

FROM ubuntu:22.04
WORKDIR /app
COPY hello.py .
CMD ["python3", "hello.py"]

When you run container:

docker run myimage
# Automatically runs: python3 hello.py

Can be overridden:

docker run myimage echo "Different command"
# Runs: echo "Different command"
# (CMD is ignored)

CMD Formats

Three formats:

1. Exec form (recommended):

CMD ["python3", "app.py"]
# ↑ As JSON array

2. Shell form:

CMD python3 app.py
# ↑ As shell command

3. As parameters to ENTRYPOINT:

ENTRYPOINT ["python3"]
CMD ["app.py"]
# Together: python3 app.py

ENTRYPOINT - Fixed Command

ENTRYPOINT = Command that always runs (can only be overridden with the --entrypoint flag)

Difference between CMD and ENTRYPOINT:

# Using CMD
CMD ["python3", "app.py"]

docker run myimage
# Runs: python3 app.py

docker run myimage ls
# Runs: ls (CMD ignored!)

# Using ENTRYPOINT
ENTRYPOINT ["python3", "app.py"]

docker run myimage
# Runs: python3 app.py

docker run myimage ls
# Runs: python3 app.py ls
#       ↑ Still runs ENTRYPOINT!

CMD + ENTRYPOINT Together

Powerful combination:

FROM python:3.11
WORKDIR /app
COPY app.py .

ENTRYPOINT ["python3"]
CMD ["app.py"]

Usage:

# Run default
docker run myimage
# Executes: python3 app.py

# Run different script
docker run myimage test.py
# Executes: python3 test.py
# ↑ ENTRYPOINT fixed, CMD replaced
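
The combination rule can be sketched as a tiny Python model (an illustration of the behavior, not Docker's actual code): runtime arguments replace CMD, while ENTRYPOINT is always kept.

```python
def effective_command(entrypoint, cmd, runtime_args=None):
    """Model of how Docker builds the container's command:
    runtime args replace CMD; ENTRYPOINT is always prepended."""
    tail = runtime_args if runtime_args else cmd
    return (entrypoint or []) + (tail or [])

# ENTRYPOINT ["python3"] + CMD ["app.py"]
print(effective_command(["python3"], ["app.py"]))               # ['python3', 'app.py']

# docker run myimage test.py  (CMD replaced, ENTRYPOINT kept)
print(effective_command(["python3"], ["app.py"], ["test.py"]))  # ['python3', 'test.py']

# CMD only, overridden at run time
print(effective_command(None, ["python3", "app.py"], ["ls"]))   # ['ls']
```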

Part 7: Dockerfile Instructions - ENV

ENV - Environment Variables

ENV = Set environment variables in the image

Syntax:

ENV KEY=VALUE

Examples:

# Set single variable
ENV APP_ENV=production

# Set multiple variables
ENV APP_ENV=production \
    DB_HOST=localhost \
    DB_PORT=5432

Using ENV

Example Dockerfile:

FROM python:3.11
WORKDIR /app

# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV APP_PORT=8000
ENV DEBUG=False

COPY app.py .
CMD ["python", "app.py"]

In your Python code (app.py):

import os

port = os.getenv('APP_PORT')  # Gets the string '8000'
debug = os.getenv('DEBUG')    # Gets the string 'False' (not a boolean!)
print(f"Starting app on port {port}")

Override ENV at Runtime

You can override when running container:

docker run -e APP_PORT=9000 myimage
# APP_PORT is now 9000 (overrides Dockerfile ENV)
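
One gotcha worth a short, self-contained Python sketch: inside the container every environment variable is a string, so values like DEBUG=False need explicit conversion in application code (the os.environ lines below just simulate what ENV would set).

```python
import os

# Simulate what "ENV APP_PORT=8000" and "ENV DEBUG=False" would set:
os.environ["APP_PORT"] = "8000"
os.environ["DEBUG"] = "False"

# Environment variables always arrive as strings — convert them yourself.
port = int(os.getenv("APP_PORT", "8000"))               # 8000 as an int
debug = os.getenv("DEBUG", "False").lower() == "true"   # False as a bool

print(port, debug)  # 8000 False
```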

Part 8: Dockerfile Instructions - EXPOSE

EXPOSE - Document Which Ports are Used

EXPOSE = Tells Docker which ports the container listens on

Syntax:

EXPOSE port

Examples:

# Web server on port 80
EXPOSE 80

# Multiple ports
EXPOSE 80 443

# With protocol
EXPOSE 8080/tcp
EXPOSE 53/udp

Understanding EXPOSE

IMPORTANT: EXPOSE is just documentation!

EXPOSE does NOT:
├── Actually publish the port
├── Make port accessible from outside
└── Do port mapping

EXPOSE only:
├── Documents which ports app uses
└── Helps other developers understand

You still need -p when running:

docker run -p 8080:80 myimage
# ↑ This is what actually publishes the port

Example:

FROM nginx:latest
EXPOSE 80
# ↑ Documents: "nginx uses port 80"

Part 9: Creating Your First Dockerfile

Example 1: Simple Python Application

Let's create a real Dockerfile!

Step 1: Create project directory

mkdir my-python-app
cd my-python-app

Step 2: Create a simple Python app (app.py)

# app.py
print("Hello from Docker!")
print("This is my first containerized app!")

import time
while True:
    print("App is running...")
    time.sleep(5)

Step 3: Create Dockerfile

Create a file named Dockerfile (no extension):

# Use Python 3.11 as base image
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Copy Python script into container
COPY app.py .

# Run the application
CMD ["python", "app.py"]

Step 4: Build the image

docker build -t my-python-app .

Understanding the command:

docker build
    -t my-python-app    ← Tag (name) for the image
    .                   ← Build context (current directory)

What you'll see:

[+] Building 12.3s (8/8) FINISHED
 => [1/3] FROM python:3.11-slim
 => [2/3] WORKDIR /app
 => [3/3] COPY app.py .
 => exporting to image
 => => naming to my-python-app

Successfully built!

Step 5: Run your container

docker run my-python-app

Output:

Hello from Docker!
This is my first containerized app!
App is running...
App is running...
App is running...
...

🎉 Congratulations! You built your first Docker image!


Understanding the Build Process

What happened during docker build:

Step 1: Read Dockerfile
Step 2: FROM python:3.11-slim
        └── Download base image (if not present)

Step 3: WORKDIR /app
        └── Create /app directory in image

Step 4: COPY app.py .
        └── Copy your file into image

Step 5: CMD ["python", "app.py"]
        └── Set default command

Step 6: Create final image
        └── Tag it as "my-python-app"

Example 2: Python Web App with Dependencies

Let's create something more realistic!

Step 1: Create project structure

mkdir flask-app
cd flask-app

Step 2: Create app.py

# app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return '<h1>Hello from Dockerized Flask!</h1>'

@app.route('/about')
def about():
    return '<h1>This is a Flask app running in Docker</h1>'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Step 3: Create requirements.txt

Flask==3.0.0

Step 4: Create Dockerfile

# Use Python 3.11 slim image
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Copy requirements file
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY app.py .

# Expose port 5000
EXPOSE 5000

# Run the application
CMD ["python", "app.py"]

Step 5: Build the image

docker build -t flask-app .

Step 6: Run the container

docker run -d -p 5000:5000 --name my-flask-app flask-app

Step 7: Test it

Open browser and go to:

  • http://localhost:5000
  • http://localhost:5000/about

You should see your Flask app running! 🎉


Understanding This Dockerfile

Let's break it down:

FROM python:3.11-slim
# Start with lightweight Python image

WORKDIR /app
# All subsequent commands run in /app

COPY requirements.txt .
# Copy dependencies list first
# Why first? For layer caching! (explained next)

RUN pip install --no-cache-dir -r requirements.txt
# Install Python packages
# --no-cache-dir = Don't save pip cache (smaller image)

COPY app.py .
# Copy application code

EXPOSE 5000
# Document that Flask uses port 5000

CMD ["python", "app.py"]
# Start Flask when container runs

Build Context and .dockerignore

Build Context = Files Docker can access during build

When you run:

docker build -t myapp .
#                      ↑
#                 Build context (current directory)

Docker sends all files in this directory to Docker daemon:

my-project/
├── app.py              ← Sent to Docker
├── requirements.txt    ← Sent to Docker
├── data.csv            ← Sent to Docker
├── old_backup.zip      ← Sent to Docker (unnecessary!)
└── node_modules/       ← Sent to Docker (huge, unnecessary!)

Using .dockerignore

Create .dockerignore file to exclude files:

# .dockerignore

# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
env/
venv/

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Git
.git/
.gitignore

# Documentation
*.md
docs/

# Tests
tests/
*.test.py

# Data files (if not needed in image)
*.csv
*.xlsx
data/

Now build is faster:

Without .dockerignore:
└── Sends 500MB to Docker

With .dockerignore:
└── Sends 5MB to Docker ✓

Build time: Much faster! ✓
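
As a rough Python illustration of the idea (real .dockerignore matching follows Go's pattern rules plus ** support, so this fnmatch sketch is only an approximation), each file in the build context is kept unless it matches an ignore pattern:

```python
from fnmatch import fnmatch

# Approximate .dockerignore-style filtering with glob patterns.
patterns = ["__pycache__/*", "*.pyc", ".git/*", "*.md", "data/*"]

def ignored(path):
    return any(fnmatch(path, p) for p in patterns)

files = ["app.py", "requirements.txt", "README.md", "data/big.csv", "src/util.pyc"]
kept = [f for f in files if not ignored(f)]
print(kept)  # ['app.py', 'requirements.txt']
```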

Part 10: Building Images - Best Practices

Best Practice 1: Layer Caching

Docker caches layers to speed up builds!

Bad example (slow rebuilds):

FROM python:3.11-slim
WORKDIR /app

# Copy everything
COPY . .

# Install dependencies
RUN pip install -r requirements.txt

CMD ["python", "app.py"]

Problem:

Change app.py:
        ↓
COPY . . changes (includes app.py)
        ↓
Cache invalidated
        ↓
Must re-run pip install (slow!) ✗

Good example (fast rebuilds):

FROM python:3.11-slim
WORKDIR /app

# Copy only requirements first
COPY requirements.txt .

# Install dependencies
RUN pip install -r requirements.txt

# Copy application code
COPY app.py .

CMD ["python", "app.py"]

Why better:

Change app.py:
        ↓
COPY requirements.txt . (unchanged)
        ↓
RUN pip install (cached! ✓)
        ↓
COPY app.py . (only this layer rebuilds)
        ↓
Fast rebuild! ✓

Rule: Copy files that change less frequently first!
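
The caching rule above can be sketched as a toy model (purely illustrative; Docker's real cache keys are more involved): each layer's cache key depends on its parent, the instruction text, and — for COPY — the content of the copied files.

```python
import hashlib

def layer_key(parent_key, instruction, file_content=b""):
    """Toy cache key: parent key + instruction + copied file contents.
    If the key is unchanged, the cached layer can be reused."""
    h = hashlib.sha256()
    h.update(parent_key + instruction.encode() + file_content)
    return h.digest()

base = layer_key(b"", "FROM python:3.11-slim")
reqs = layer_key(base, "COPY requirements.txt .", b"Flask==3.0.0")
deps = layer_key(reqs, "RUN pip install -r requirements.txt")

# Changing only app.py leaves requirements.txt untouched, so the
# expensive pip-install layer's key (and its cache) is unchanged:
reqs2 = layer_key(base, "COPY requirements.txt .", b"Flask==3.0.0")
deps2 = layer_key(reqs2, "RUN pip install -r requirements.txt")
print(deps == deps2)  # True
```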


Best Practice 2: Minimize Layers

Combine RUN commands:

Bad:

RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y vim
RUN apt-get install -y git
# 4 layers ✗

Good:

RUN apt-get update && apt-get install -y \
    curl \
    vim \
    git
# 1 layer ✓

Best Practice 3: Clean Up in Same Layer

Bad:

RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*
# Cleanup in separate layer doesn't reduce image size! ✗

Good:

RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
# Cleanup in same layer reduces size! ✓

Best Practice 4: Use Specific Tags

Bad:

FROM python:latest
# What version? Changes over time! ✗

Good:

FROM python:3.11-slim
# Specific version, predictable! ✓

Best Practice 5: Use .dockerignore

Always create .dockerignore to exclude unnecessary files!


Best Practice 6: Multi-line for Readability

Bad:

RUN apt-get update && apt-get install -y curl vim git wget htop

Good:

RUN apt-get update && apt-get install -y \
    curl \
    vim \
    git \
    wget \
    htop
# Easier to read and modify ✓

Practice Exercises

Let's practice creating Dockerfiles!

Exercise 1: Node.js Application

Create a simple Node.js app:

app.js:

const http = require('http');

const server = http.createServer((req, res) => {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.end('<h1>Hello from Node.js in Docker!</h1>');
});

server.listen(3000, '0.0.0.0', () => {
    console.log('Server running on port 3000');
});

Your task: Create Dockerfile

Hints:

  • Use FROM node:18
  • WORKDIR /app
  • COPY app.js .
  • EXPOSE 3000
  • CMD ["node", "app.js"]

Build and run:

docker build -t node-app .
docker run -d -p 3000:3000 node-app

Test: http://localhost:3000


Exercise 2: Static Website with Nginx

Create index.html:

<!DOCTYPE html>
<html>
<head>
    <title>My Docker Site</title>
</head>
<body>
    <h1>Welcome to my Dockerized website!</h1>
    <p>This is served by Nginx running in a Docker container.</p>
</body>
</html>

Your task: Create Dockerfile

Hints:

  • Use FROM nginx:alpine
  • Copy index.html to /usr/share/nginx/html/
  • EXPOSE 80

Build and run:

docker build -t my-website .
docker run -d -p 8080:80 my-website

Test: http://localhost:8080


Summary

What We Learned:

✅ What a Dockerfile is
✅ Dockerfile instructions:
   ├── FROM (base image)
   ├── RUN (execute commands)
   ├── COPY (copy files)
   ├── WORKDIR (set directory)
   ├── CMD (default command)
   ├── ENTRYPOINT (fixed command)
   ├── ENV (environment variables)
   └── EXPOSE (document ports)
✅ Building images with docker build
✅ Layer caching
✅ Best practices
✅ .dockerignore
✅ Created real applications

Docker Compose

Docker Compose is one of the most powerful and useful Docker tools. Let's dive in!