Complete Beginner Guide: Deploy HTML/CSS/JS Website with Docker, Docker Hub, AWS & CI/CD

WHAT WE ARE BUILDING

You will write a simple website (HTML + CSS + JS), put it inside Docker, upload it to Docker Hub, host it on AWS, and set up automatic deployment so every time you change your code and push to GitHub, your website updates itself on AWS automatically.


TOOLS WE NEED

Before starting anything, install these tools on your Windows 11 machine.

1. VS Code (Code Editor) Go to https://code.visualstudio.com → Download for Windows → Install it. This is where you write your code.

2. Git Go to https://git-scm.com/download/win → Download → Install with all default options. This is needed even if you use GitHub Desktop.

3. GitHub Desktop (GUI Option) Go to https://desktop.github.com → Download → Install → Sign in with your GitHub account. This lets you push code without using any commands.

4. Docker Desktop Go to https://www.docker.com/products/docker-desktop → Download for Windows → Install. During installation make sure "Use WSL 2 instead of Hyper-V" is checked. After install, restart your PC. Open Docker Desktop and sign in with your Docker Hub account (create one at https://hub.docker.com if you don't have one).

5. Accounts You Need

  • A GitHub account (https://github.com) to host your code
  • A Docker Hub account (https://hub.docker.com) to host your Docker images
  • An AWS account (https://aws.amazon.com) to host your server (we create this in Part 6)


PART 1 — CREATE YOUR PROJECT

Step 1: Create the Project Folder

Open File Explorer on your Windows 11. Go to C drive, then your Users folder, then your username folder. Create a new folder and name it "my-website". The full path will be something like C:\Users\YourName\my-website.
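If you prefer the command line, you can create the folder from a terminal instead. A quick sketch (mkdir and cd behave the same in Command Prompt, PowerShell, and Git Bash):

```shell
# Create the project folder and move into it
mkdir my-website
cd my-website
```

Either way, you end up with the same empty folder.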

Step 2: Open the Folder in VS Code

Two ways to do this:

GUI Way: Right-click on the my-website folder → you will see "Open with Code" in the menu → click it.

CLI Way: Open the folder in File Explorer, then click on the address bar at the top, type "cmd" and press Enter. A black command prompt window opens. Then type:

code .

This opens VS Code in that folder.

Step 3: Create the Folder Structure

In VS Code, on the left side you see the Explorer panel. You will create folders and files there.

To create a folder: Click the "New Folder" icon (it looks like a folder with a plus sign) in the Explorer panel. To create a file: Click the "New File" icon (it looks like a paper with a plus sign).

Create this exact structure:

my-website/
├── src/
│   ├── index.html
│   ├── css/
│   │   └── style.css
│   └── js/
│       └── main.js
├── .github/
│   └── workflows/
│       └── cicd.yml
├── Dockerfile
├── nginx.conf
└── .dockerignore

How to create it step by step:

  • Click New Folder icon → type "src" → press Enter
  • Click on the src folder to select it → click New Folder icon → type "css" → press Enter
  • Click on the src folder again → click New Folder icon → type "js" → press Enter
  • Click New Folder icon at root level → type ".github" → press Enter
  • Click on .github folder → click New Folder icon → type "workflows" → press Enter
  • Now create files: click on src folder → New File → "index.html"
  • Click on css folder → New File → "style.css"
  • Click on js folder → New File → "main.js"
  • Click on workflows folder → New File → "cicd.yml"
  • Click on root (my-website) level → New File → "Dockerfile" (no extension, exactly this)
  • New File → "nginx.conf"
  • New File → ".dockerignore"
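If you would rather script the whole tree, these commands do the same thing in one go. This is a sketch using Git Bash syntax (Git Bash ships with the Git install from earlier; mkdir -p and touch are not available in plain Command Prompt):

```shell
# Create all nested folders at once
mkdir -p src/css src/js .github/workflows

# Create the empty files
touch src/index.html src/css/style.css src/js/main.js
touch .github/workflows/cicd.yml Dockerfile nginx.conf .dockerignore
```

Run it inside the my-website folder, then refresh the VS Code Explorer to see the new files.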

PART 2 — WRITE THE CODE

Step 4: Write index.html

Click on index.html in VS Code and paste this:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  <title>My Website</title>
  <link rel="stylesheet" href="css/style.css" />
</head>
<body>
  <div class="container">
    <h1>Hello World!</h1>
    <p id="message">My website is live on AWS with Docker and CI/CD!</p>
    <button id="btn">Click Me</button>
  </div>
  <script src="js/main.js"></script>
</body>
</html>

Notice that the CSS file is linked using href="css/style.css" and the JS file is linked at the bottom using src="js/main.js". This is how separate files connect to HTML.

Step 5: Write style.css

Click on style.css and paste this:

* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}

body {
  font-family: Arial, sans-serif;
  background: linear-gradient(135deg, #1a1a2e, #16213e);
  color: white;
  display: flex;
  justify-content: center;
  align-items: center;
  min-height: 100vh;
}

.container {
  text-align: center;
  padding: 40px;
  background: rgba(255, 255, 255, 0.05);
  border-radius: 16px;
  border: 1px solid rgba(255, 255, 255, 0.1);
}

h1 {
  font-size: 2.5rem;
  margin-bottom: 16px;
  color: #e94560;
}

p {
  font-size: 1.1rem;
  margin-bottom: 24px;
  color: #a8b2d8;
}

button {
  padding: 12px 32px;
  background: #e94560;
  color: white;
  border: none;
  border-radius: 8px;
  font-size: 1rem;
  cursor: pointer;
}

button:hover {
  background: #c73652;
}

Step 6: Write main.js

Click on main.js and paste this:

document.getElementById('btn').addEventListener('click', function() {
  document.getElementById('message').textContent = 'JavaScript is working! CI/CD pipeline deployed this.';
  document.getElementById('btn').textContent = 'Clicked!';
  document.getElementById('btn').style.background = '#2ecc71';
});

Step 7: Write nginx.conf

Nginx is a web server that will serve your HTML files inside Docker. Click on nginx.conf and paste this:

server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}

Step 8: Write Dockerfile

The Dockerfile tells Docker how to package your website. Click on Dockerfile and paste this:

# Use official nginx image based on Alpine Linux (small and fast)
FROM nginx:alpine

# Remove the default nginx config file
RUN rm /etc/nginx/conf.d/default.conf

# Copy our custom nginx config
COPY nginx.conf /etc/nginx/conf.d/default.conf

# Copy our website files into the nginx web root folder
COPY src/ /usr/share/nginx/html/

# Tell Docker this container uses port 80
EXPOSE 80

# Start nginx when container runs
CMD ["nginx", "-g", "daemon off;"]

Step 9: Write .dockerignore

This tells Docker to ignore certain files when building. Click on .dockerignore and paste this:

.git
.github
*.md

PART 3 — TEST LOCALLY WITH DOCKER

Step 10: Open Terminal in VS Code

In VS Code, press Ctrl + ` (backtick key, below Escape key). This opens a terminal at the bottom.

Make sure Docker Desktop is running (you should see the Docker whale icon in your taskbar).

Step 11: Build the Docker Image

In the terminal, type this command and press Enter:

docker build -t my-website:latest .

What this means: "docker build" means build an image, "-t my-website:latest" means name it "my-website" with tag "latest", and the dot "." means use the current folder.

You will see Docker downloading base layers and running each step. Wait for it to finish; the final lines confirm the image was built and tagged (older Docker versions print "Successfully built").

GUI Way to verify: Open Docker Desktop → click "Images" on the left sidebar → you will see "my-website" listed there.

Step 12: Run the Container Locally

docker run -d -p 8080:80 --name my-website-test my-website:latest

What this means: "-d" means run in background, "-p 8080:80" means map your computer's port 8080 to the container's port 80, "--name my-website-test" gives the container a name.

Now open your browser and go to: http://localhost:8080

You should see your website! If it works, your Docker setup is correct.

To stop the container:

CLI Way:

docker stop my-website-test
docker rm my-website-test

GUI Way: Open Docker Desktop → click "Containers" on the left → you will see "my-website-test" → click the Stop button (square icon) → then click the Delete button (trash icon).


PART 4 — PUSH TO GITHUB

Step 13: Initialize Git and Push to GitHub

GUI Way using GitHub Desktop:

  1. Open GitHub Desktop
  2. Click "Add an Existing Repository from your Hard Drive"
  3. Click "Choose" and browse to your my-website folder → click "Select Folder"
  4. It will say "This directory does not appear to be a Git repository" → click "create a repository" link
  5. Name: my-website
  6. Local path: already set to your folder
  7. Click "Create Repository"
  8. Now click "Publish repository" button at the top
  9. Uncheck "Keep this code private" if you want it public (you can keep private too)
  10. Click "Publish Repository"
  11. Your code is now on GitHub. You can verify by going to https://github.com/yourusername/my-website

CLI Way:

Open terminal in VS Code and run these commands one by one:

git init
git add .
git commit -m "Initial commit: add website and Docker files"

Now go to https://github.com, log in, click the "+" button at top right, click "New repository", name it "my-website", leave other settings default, click "Create repository".

GitHub will show you commands. Copy and run them. They look like this:

git remote add origin https://github.com/yourusername/my-website.git
git branch -M main
git push -u origin main

Replace "yourusername" with your actual GitHub username.


PART 5 — DOCKER HUB SETUP

Step 14: Create a Docker Hub Repository

  1. Go to https://hub.docker.com and log in
  2. Click the "Create Repository" button (blue button)
  3. Repository name: my-website
  4. Visibility: Public
  5. Click "Create"

Your Docker Hub repo address will be: hub.docker.com/r/yourusername/my-website

Step 15: Create a Docker Hub Access Token

This token is like a password that GitHub Actions will use to push images to Docker Hub. You never share your actual password with GitHub.

  1. On Docker Hub, click your profile photo (top right) → "Account Settings"
  2. Click "Security" in the left menu
  3. Click "New Access Token"
  4. Name it: github-actions
  5. Access permissions: Read, Write, Delete
  6. Click "Generate"
  7. A token will appear on screen — COPY IT NOW and save it in Notepad. You will never see this again after closing the window.

Step 16: Push Your Image to Docker Hub (Test)

In the VS Code terminal, log in to Docker Hub first (if you are not already logged in):

docker login

Enter your Docker Hub username and password (or the access token from Step 15).

Then tag and push your image:

docker tag my-website:latest yourusername/my-website:latest
docker push yourusername/my-website:latest

Replace "yourusername" with your actual Docker Hub username.

GUI Way to push: Docker Desktop has no direct "push" button, so the two terminal commands above are the simplest way to push from your local machine.

After pushing, go to https://hub.docker.com/r/yourusername/my-website and you should see your image listed there with the "latest" tag.


PART 6 — AWS SETUP

Step 17: Create AWS Account

Go to https://aws.amazon.com and click "Create an AWS Account". Fill in your email, password, and account name. You will need a credit or debit card but will not be charged as long as you use free tier resources. Complete phone verification and sign in to the AWS Console at https://console.aws.amazon.com.

Step 18: Create an EC2 Instance (Your Server on AWS)

EC2 is basically a computer that runs 24/7 in the cloud. This is where your website will live.

GUI Way (AWS Console):

  1. In the AWS Console, look at the top search bar. Type "EC2" and click on EC2 from the results.

  2. You will see the EC2 Dashboard. Click the orange "Launch Instance" button.

  3. On the "Launch an instance" page, fill in these settings:

    Under "Name and tags":

    • Name: my-website-server

    Under "Application and OS Images":

    • Click on "Amazon Linux" (it should already be selected)
    • Make sure "Amazon Linux 2023 AMI" is selected
    • You should see "Free tier eligible" next to it

    Under "Instance type":

    • Select "t2.micro"
    • You should see "Free tier eligible" next to it

    Under "Key pair (login)":

    • Click "Create new key pair"
    • Key pair name: my-website-key
    • Key pair type: RSA
    • Private key file format: .pem
    • Click "Create key pair"
    • A file called "my-website-key.pem" will automatically download to your Downloads folder
    • MOVE THIS FILE to C:\Users\YourName\.ssh\ (create the .ssh folder if it doesn't exist). This file is very important: it is how you access your server. Do not lose it.

    Under "Network settings":

    • Click the "Edit" button on the right side
    • You will see one rule for SSH (port 22). In the "Source type" dropdown for this rule, select "My IP". This means only your computer can SSH into the server.
    • Click "Add security group rule" to add a second rule:
      • Type: HTTP
      • Protocol: TCP
      • Port range: 80
      • Source type: Anywhere (0.0.0.0/0)
      • This allows everyone to access your website on port 80
    • Click "Add security group rule" again for a third rule:
      • Type: HTTPS
      • Protocol: TCP
      • Port range: 443
      • Source type: Anywhere (0.0.0.0/0)

    Under "Configure storage":

    • Keep default 8 GB
  4. Click the orange "Launch instance" button on the right side summary panel.

  5. Click "View all instances".

  6. You will see your instance. Wait about 1-2 minutes until the "Instance state" column shows "Running" with a green dot. Refresh the page if needed.

  7. Click on your instance to open its details. Look for "Public IPv4 address" on the right side. Write down this IP address. It looks something like 13.233.45.67. This is your server's address. Note that this public IP changes if you ever stop and start the instance; an Elastic IP gives you a permanent address.

Step 19: Connect to Your EC2 Server and Install Docker

You need to go inside your server and install Docker on it.

GUI Way (using EC2 Instance Connect in browser):

  1. In AWS Console, go to EC2 → Instances → click on your instance
  2. Click the "Connect" button at the top
  3. Click on the "EC2 Instance Connect" tab
  4. Username should say "ec2-user" — leave it as is
  5. Click the orange "Connect" button
  6. A browser terminal window opens. You are now inside your server!

CLI Way (using terminal on Windows):

Open VS Code terminal and run:

ssh -i C:\Users\YourName\.ssh\my-website-key.pem ec2-user@YOUR_EC2_PUBLIC_IP

Replace YOUR_EC2_PUBLIC_IP with the IP you noted in step 18.

If you get a permission error on Windows, you need to restrict the .pem file's permissions so that only your user can read it.

GUI Way: Right-click the .pem file → Properties → Security → Advanced → Disable inheritance → Remove all inherited permissions → Add → Select a principal → type your Windows username → Full control → OK.

CLI Way (Command Prompt):

icacls C:\Users\YourName\.ssh\my-website-key.pem /inheritance:r
icacls C:\Users\YourName\.ssh\my-website-key.pem /grant:r "%USERNAME%":R

The first command removes inherited permissions; the second grants read access to your user only.

Now whether you used GUI or CLI, you are in the server terminal. Run these commands:

sudo yum update -y

Wait for it to finish. This updates the server's software.

sudo yum install docker -y

This installs Docker on the server.

sudo systemctl start docker

This starts Docker.

sudo systemctl enable docker

This makes Docker start automatically if the server restarts.

sudo usermod -aG docker ec2-user

This allows ec2-user to run Docker commands without typing "sudo" each time. The group change only applies to new sessions, so type "exit" and reconnect before running Docker commands without sudo.

docker --version

This confirms Docker is installed. You should see a version number.

You can close this terminal window. The server is ready.


PART 7 — ADD SECRETS TO GITHUB

Step 20: Add Repository Secrets

GitHub Actions (your CI/CD pipeline) needs some private information like your Docker Hub token and AWS server details. You store these as "secrets" in GitHub so they are not visible in your code.

  1. Go to https://github.com/yourusername/my-website
  2. Click the "Settings" tab (it's in the top navigation of your repo, not your account settings)
  3. In the left sidebar, scroll down and click "Secrets and variables"
  4. Click "Actions" under it
  5. You will see a "Repository secrets" section
  6. Click the "New repository secret" button

Add these 5 secrets one by one:

Secret 1:

  • Name: DOCKERHUB_USERNAME
  • Secret: your Docker Hub username (just the username, nothing else)
  • Click "Add secret"

Secret 2:

  • Name: DOCKERHUB_TOKEN
  • Secret: paste the token you copied and saved in Notepad from Step 15
  • Click "Add secret"

Secret 3:

  • Name: EC2_HOST
  • Secret: your EC2 public IP address (e.g., 13.233.45.67)
  • Click "Add secret"

Secret 4:

  • Name: EC2_USERNAME
  • Secret: ec2-user
  • Click "Add secret"

Secret 5:

  • Name: EC2_SSH_KEY
  • Secret: you need to paste the entire content of your .pem file here
  • To get the content: go to VS Code → File → Open File → browse to C:\Users\YourName\.ssh\my-website-key.pem → open it → press Ctrl+A to select all → Ctrl+C to copy
  • Paste it as the secret value
  • Click "Add secret"

You should now see 5 secrets listed. These are encrypted and nobody can see them except GitHub Actions when it runs.


PART 8 — CREATE THE CI/CD PIPELINE

Step 21: Write the GitHub Actions Workflow

Click on cicd.yml in VS Code (it's in .github/workflows/ folder) and paste this:

name: CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:

  build-and-push:
    name: Build and Push Docker Image to Docker Hub
    runs-on: ubuntu-latest

    steps:
      - name: Step 1 - Checkout the code
        uses: actions/checkout@v4

      - name: Step 2 - Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Step 3 - Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Step 4 - Build the Docker image and push it to Docker Hub
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: |
            ${{ secrets.DOCKERHUB_USERNAME }}/my-website:latest
            ${{ secrets.DOCKERHUB_USERNAME }}/my-website:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    name: Deploy to AWS EC2
    runs-on: ubuntu-latest
    needs: build-and-push
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'

    steps:
      - name: Step 5 - SSH into EC2 and deploy
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USERNAME }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            echo "Pulling latest image from Docker Hub..."
            docker pull ${{ secrets.DOCKERHUB_USERNAME }}/my-website:latest

            echo "Stopping old container if it exists..."
            docker stop my-website || true
            docker rm my-website || true

            echo "Starting new container..."
            docker run -d \
              --name my-website \
              --restart unless-stopped \
              -p 80:80 \
              ${{ secrets.DOCKERHUB_USERNAME }}/my-website:latest

            echo "Removing old unused images..."
            docker image prune -f

            echo "Deployment done! Website is live."

Save the file with Ctrl+S.

Let me explain what this pipeline does in simple words. When you push code to the main branch on GitHub, GitHub Actions automatically starts two jobs. The first job called "build-and-push" picks up your code, builds a Docker image from it, and pushes that image to Docker Hub. The second job called "deploy" only runs after the first job succeeds. It connects to your AWS EC2 server via SSH, pulls the new Docker image from Docker Hub, stops the old container that was running, and starts a new container with the new image. Your website is then updated.


PART 9 — PUSH EVERYTHING AND GO LIVE

Step 22: Push All Files to GitHub

You now have all your files ready. Push everything to GitHub.

GUI Way using GitHub Desktop:

  1. Open GitHub Desktop
  2. On the left side you will see all your changed files listed
  3. At the bottom left, in the "Summary" field type: Add all project files with CI/CD pipeline
  4. Click "Commit to main"
  5. Click "Push origin" button at the top

CLI Way:

In VS Code terminal:

git add .
git commit -m "Add all project files with CI/CD pipeline"
git push origin main

Step 23: Watch the Pipeline Run

  1. Go to https://github.com/yourusername/my-website
  2. Click the "Actions" tab in the top navigation
  3. You will see a workflow run starting. It will have a yellow circle which means it is running.
  4. Click on the workflow run to open it
  5. You will see two jobs: "Build and Push Docker Image to Docker Hub" and "Deploy to AWS EC2"
  6. Click on any job to see its real-time logs
  7. The build job takes about 2-3 minutes
  8. The deploy job takes about 1 minute
  9. When both jobs show green checkmarks, your website is deployed

If any job fails, click on it and read the error message. Common issues are wrong secret values or wrong EC2 IP.

Step 24: See Your Live Website

Open your browser and type:

http://YOUR_EC2_PUBLIC_IP

For example: http://13.233.45.67

Your website is now live on the internet, running on AWS!


PART 10 — HOW TO UPDATE YOUR WEBSITE

This is the beauty of CI/CD. Every time you want to update your website, you just change your code and push. Everything else happens automatically.

For example, open src/index.html and change "Hello World!" to "Hello World! Updated!". Save the file.

GUI Way: Open GitHub Desktop → you see the changed file → write a commit message → Commit to main → Push origin.

CLI Way:

git add .
git commit -m "Update heading text"
git push origin main

Then go to GitHub Actions tab and watch the pipeline run. In about 3-4 minutes your change will be live on AWS automatically.


FINAL FOLDER STRUCTURE (What You Should Have)

my-website/
│
├── .github/
│   └── workflows/
│       └── cicd.yml         ← GitHub Actions pipeline
│
├── src/
│   ├── index.html           ← Main HTML file
│   ├── css/
│   │   └── style.css        ← All styling
│   └── js/
│       └── main.js          ← All JavaScript
│
├── Dockerfile               ← Instructions to build Docker image
├── nginx.conf               ← Nginx web server config
└── .dockerignore            ← Files to ignore when building Docker image

COMPLETE FLOW IN SIMPLE WORDS

You write code in VS Code on your Windows 11 laptop. You push the code to GitHub using GitHub Desktop (clicking) or git push (typing). GitHub sees new code on the main branch and automatically starts the CI/CD pipeline. The pipeline builds a Docker image of your website and uploads it to Docker Hub. Then the pipeline connects to your AWS server and tells it to download the new image and restart the website container. Your website is updated on the internet without you doing anything manually on the server.


TROUBLESHOOTING COMMON ISSUES

Docker Desktop not starting: Make sure virtualization is enabled in your BIOS. On Windows 11 it should be on by default. Also make sure WSL 2 is installed. Open PowerShell as Administrator and run: wsl --install

Permission denied when pushing to Docker Hub: Make sure you are logged in. Run "docker login" in terminal and enter your credentials.

GitHub Actions failing at deploy step: Double-check your EC2_SSH_KEY secret. Open your .pem file in VS Code, make sure you copied the ENTIRE content including the first line "-----BEGIN RSA PRIVATE KEY-----" and the last line "-----END RSA PRIVATE KEY-----".

Website not opening on EC2 IP: Check that port 80 is open in your EC2 security group. Go to AWS Console → EC2 → Security Groups → find your security group → check Inbound rules → HTTP port 80 should be there with source 0.0.0.0/0.

SSH connection refused when using CLI: Make sure your EC2 instance is in Running state. Also make sure the Security Group has port 22 open for your IP. Your IP might have changed if you are on a home network — go to Security Group rules and update the SSH source to "Anywhere" temporarily for testing, then change back to your IP.

Docker Compose

Docker Compose is one of the most powerful and useful Docker tools. Let's dive in!


Part 1: What is Docker Compose?

The Problem Docker Compose Solves

Imagine you built a multi-container application:

Your Application needs:
├── Web server (Nginx)
├── API server (Python Flask)
├── Database (PostgreSQL)
├── Cache (Redis)
└── Message Queue (RabbitMQ)

5 containers to manage!

Without Docker Compose:

# Create networks
docker network create frontend
docker network create backend

# Start database
docker run -d \
  --name postgres \
  --network backend \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=myapp \
  -v postgres-data:/var/lib/postgresql/data \
  postgres:15

# Start Redis
docker run -d \
  --name redis \
  --network backend \
  redis:7

# Start API
docker run -d \
  --name api \
  --network backend \
  -e DATABASE_URL=postgresql://postgres:secret@postgres:5432/myapp \
  -e REDIS_URL=redis://redis:6379 \
  my-api

docker network connect frontend api

# Start Web
docker run -d \
  --name web \
  --network frontend \
  -p 80:80 \
  my-web

# That's a LOT of commands! 😰
# And you need to remember all of them!
# Starting, stopping, updating... nightmare!

Docker Compose Solution

With Docker Compose, ONE file describes everything:

# docker-compose.yml
version: '3.8'

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:7

  api:
    build: ./api
    environment:
      DATABASE_URL: postgresql://postgres:secret@postgres:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      - postgres
      - redis

  web:
    build: ./web
    ports:
      - "80:80"
    depends_on:
      - api

volumes:
  postgres-data:

Now just run:

docker-compose up

That's it! All four containers started with proper configuration (RabbitMQ would be just one more service block)! ✓


What is Docker Compose?

Simple Definition:

Docker Compose = Tool for defining and running 
                 multi-container Docker applications

Key features:
├── Define everything in YAML file
├── Single command to start/stop all containers
├── Automatic network creation
├── Volume management
├── Service dependencies
└── Easy scaling

Think of it as:

Recipe Book (docker-compose.yml):
├── Lists all ingredients (services)
├── Preparation steps (configuration)
├── Cooking order (dependencies)
└── Final presentation (ports, networks)

One command to cook the entire meal! 🍽️

Part 2: Installing Docker Compose

Checking if Docker Compose is Installed

Docker Desktop includes Docker Compose!

docker-compose --version

Output:

Docker Compose version v2.24.5

✓ Already installed with Docker Desktop!


Docker Compose v1 vs v2

Two versions exist:

Docker Compose v1:
├── Separate tool
├── Command: docker-compose (with hyphen)
└── Older version

Docker Compose v2:
├── Integrated into Docker CLI
├── Command: docker compose (space, no hyphen)
└── Newer, faster version

Both work, but v2 is recommended:

# v1 syntax (old)
docker-compose up

# v2 syntax (new, recommended)
docker compose up

For this tutorial, we'll use v2 syntax (docker compose), but v1 also works!


Part 3: Docker Compose File Basics

Creating Your First docker-compose.yml

Docker Compose uses YAML format.

YAML Basics (Quick!):

# Comments start with #

# Key-value pairs
name: value

# Nested structure (indentation matters!)
parent:
  child: value
  another_child: value

# Lists
items:
  - item1
  - item2
  - item3

# Multi-line strings
description: |
  This is a
  multi-line
  string

⚠️ Important: YAML is VERY sensitive to indentation! Use spaces, not tabs!


Basic docker-compose.yml Structure

version: '3.8'  # Compose file version (optional; Compose v2 treats it as obsolete)

services:       # Define containers
  service1:
    # Configuration for service1
  
  service2:
    # Configuration for service2

volumes:        # Define volumes (optional)
  volume1:

networks:       # Define networks (optional)
  network1:

Example 1: Single Service (Nginx)

docker-compose.yml:

version: '3.8'

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"

That's it! Now run:

docker compose up

Output:

[+] Running 1/1
 ✔ Container project-web-1  Started
 
Attaching to web-1
web-1  | /docker-entrypoint.sh: Configuration complete
web-1  | nginx: [notice] starting nginx...

Open browser: http://localhost:8080

✓ Nginx running!

To stop:

# Press Ctrl+C

# Or in another terminal:
docker compose down

Understanding Service Names

In docker-compose.yml:

services:
  web:      # ← This is the service name

Docker Compose creates container with name:

project-web-1
  ↑     ↑   ↑
  │     │   └── Instance number
  │     └── Service name
  └── Project name (directory name)

Part 4: Service Configuration Options

Common Service Options

Let's explore all important options:


1. image - Use Existing Image

services:
  db:
    image: postgres:15
    # Uses official PostgreSQL image from Docker Hub

2. build - Build from Dockerfile

services:
  api:
    build: ./api
    # Builds from Dockerfile in ./api directory

Or with more options:

services:
  api:
    build:
      context: ./api        # Directory with Dockerfile
      dockerfile: Dockerfile.prod  # Custom Dockerfile name
      args:                 # Build arguments
        VERSION: 1.0

3. ports - Port Mapping

services:
  web:
    image: nginx
    ports:
      - "8080:80"       # Host:Container
      - "8443:443"

Format:

ports:
  - "HOST_PORT:CONTAINER_PORT"

4. environment - Environment Variables

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
      POSTGRES_USER: admin

Or from file:

services:
  api:
    image: my-api
    env_file:
      - .env        # Load from .env file

.env file:

DATABASE_URL=postgresql://localhost/mydb
API_KEY=abc123
DEBUG=true
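Compose also reads a .env file in the project directory for variable substitution inside docker-compose.yml itself. A minimal sketch (API_KEY is just an example name, not something the earlier services use):

```yaml
services:
  api:
    image: my-api
    environment:
      # ${API_KEY} is substituted from .env (or from your shell environment)
      API_KEY: ${API_KEY}
```

Note the difference: env_file injects variables into the container, while ${...} substitution fills in values inside the compose file before it is parsed.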

5. volumes - Data Persistence

Named volume:

services:
  db:
    image: postgres:15
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:    # Define volume

Bind mount:

services:
  web:
    image: nginx
    volumes:
      - ./html:/usr/share/nginx/html    # Host:Container

Multiple volumes:

services:
  app:
    image: my-app
    volumes:
      - app-data:/data          # Named volume
      - ./config:/app/config    # Bind mount
      - ./logs:/app/logs        # Another bind mount

6. depends_on - Service Dependencies

services:
  web:
    image: nginx
    depends_on:
      - api         # Start api before web

  api:
    image: my-api
    depends_on:
      - db          # Start db before api

  db:
    image: postgres

Start order: db → api → web

⚠️ Note: depends_on only waits for container to START, not for it to be READY!
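If a service must wait until another is actually READY, Compose supports a long form of depends_on combined with a healthcheck. A sketch, assuming the Postgres setup from earlier (my-api is a placeholder image name):

```yaml
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  api:
    image: my-api
    depends_on:
      db:
        condition: service_healthy   # wait until the healthcheck passes
```

With condition: service_healthy, Compose starts api only after db's healthcheck reports healthy, which removes the need for retry loops in application code.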


7. networks - Custom Networks

services:
  web:
    image: nginx
    networks:
      - frontend

  api:
    image: my-api
    networks:
      - frontend
      - backend

  db:
    image: postgres
    networks:
      - backend

networks:
  frontend:
  backend:

8. restart - Restart Policy

services:
  api:
    image: my-api
    restart: always
    # Options: no, always, on-failure, unless-stopped

Options:

no              = Never restart
always          = Always restart (even after reboot)
on-failure      = Restart only if exit code != 0
unless-stopped  = Always restart unless manually stopped

9. command - Override Default Command

services:
  db:
    image: postgres
    command: postgres -c max_connections=200
    # Overrides default command

10. container_name - Custom Container Name

services:
  db:
    image: postgres
    container_name: my-postgres-db
    # Instead of default: project-db-1

Part 5: Complete Example - Web Application

Building a Full Application

Let's create: Web Frontend + API Backend + PostgreSQL Database


Project Structure

my-app/
├── docker-compose.yml
├── web/
│   ├── Dockerfile
│   └── index.html
├── api/
│   ├── Dockerfile
│   ├── app.py
│   └── requirements.txt
└── .env

Step 1: Create Project Directory

mkdir my-app
cd my-app
mkdir web api

Step 2: Create API

api/app.py:

from flask import Flask, jsonify
import psycopg2
import os
import time

app = Flask(__name__)

def get_db():
    # Wait for database to be ready
    max_retries = 30
    for i in range(max_retries):
        try:
            conn = psycopg2.connect(
                host=os.getenv('DB_HOST', 'db'),
                database=os.getenv('DB_NAME', 'myapp'),
                user=os.getenv('DB_USER', 'postgres'),
                password=os.getenv('DB_PASSWORD', 'secret')
            )
            return conn
        except psycopg2.OperationalError:
            if i < max_retries - 1:
                time.sleep(1)
            else:
                raise

@app.route('/api/status')
def status():
    return jsonify({
        'status': 'ok',
        'message': 'API is running!'
    })

@app.route('/api/db-check')
def db_check():
    try:
        db = get_db()
        cursor = db.cursor()
        cursor.execute('SELECT version()')
        version = cursor.fetchone()[0]
        db.close()
        return jsonify({
            'status': 'ok',
            'database': version
        })
    except Exception as e:
        return jsonify({
            'status': 'error',
            'message': str(e)
        }), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

api/requirements.txt:

flask==3.0.0
psycopg2-binary==2.9.9

api/Dockerfile:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]

Step 3: Create Web Frontend

web/index.html:

<!DOCTYPE html>
<html>
<head>
    <title>My Docker Compose App</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            max-width: 800px;
            margin: 50px auto;
            padding: 20px;
            background-color: #f5f5f5;
        }
        .container {
            background: white;
            padding: 30px;
            border-radius: 10px;
            box-shadow: 0 2px 10px rgba(0,0,0,0.1);
        }
        button {
            background: #007bff;
            color: white;
            border: none;
            padding: 10px 20px;
            border-radius: 5px;
            cursor: pointer;
            margin: 5px;
        }
        button:hover {
            background: #0056b3;
        }
        #result {
            background: #f8f9fa;
            padding: 15px;
            border-radius: 5px;
            margin-top: 20px;
            white-space: pre-wrap;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>🐳 Docker Compose Demo App</h1>
        <p>This demonstrates a multi-container application with Docker Compose!</p>
        
        <div>
            <button onclick="checkAPI()">Check API Status</button>
            <button onclick="checkDB()">Check Database</button>
        </div>
        
        <div id="result"></div>
    </div>
    
    <script>
        async function checkAPI() {
            const result = document.getElementById('result');
            result.textContent = 'Loading...';
            
            try {
                const response = await fetch('/api/status');
                const data = await response.json();
                result.textContent = JSON.stringify(data, null, 2);
            } catch (error) {
                result.textContent = 'Error: ' + error.message;
            }
        }
        
        async function checkDB() {
            const result = document.getElementById('result');
            result.textContent = 'Loading...';
            
            try {
                const response = await fetch('/api/db-check');
                const data = await response.json();
                result.textContent = JSON.stringify(data, null, 2);
            } catch (error) {
                result.textContent = 'Error: ' + error.message;
            }
        }
    </script>
</body>
</html>

web/nginx.conf:

server {
    listen 80;
    
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
    
    location /api/ {
        proxy_pass http://api:5000/api/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

web/Dockerfile:

FROM nginx:alpine

COPY index.html /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/conf.d/default.conf

Step 4: Create docker-compose.yml

docker-compose.yml:

version: '3.8'

services:
  # PostgreSQL Database
  db:
    image: postgres:15-alpine
    container_name: myapp-postgres
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - backend
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # API Backend
  api:
    build: ./api
    container_name: myapp-api
    environment:
      DB_HOST: db
      DB_NAME: myapp
      DB_USER: postgres
      DB_PASSWORD: secret
    depends_on:
      db:
        condition: service_healthy
    networks:
      - frontend
      - backend
    restart: unless-stopped

  # Web Frontend
  web:
    build: ./web
    container_name: myapp-web
    ports:
      - "8080:80"
    depends_on:
      - api
    networks:
      - frontend
    restart: unless-stopped

volumes:
  postgres-data:

networks:
  frontend:
  backend:

Step 5: Run the Application

Start everything:

docker compose up

Or run in background:

docker compose up -d

Output:

[+] Running 6/6
 ✔ Network myapp_frontend        Created
 ✔ Network myapp_backend         Created
 ✔ Volume "myapp_postgres-data"  Created
 ✔ Container myapp-postgres      Started
 ✔ Container myapp-api           Started
 ✔ Container myapp-web           Started

Open browser: http://localhost:8080

Click buttons to test! ✓


Part 6: Docker Compose Commands

Essential Commands

Start services:

# Start in foreground (see logs)
docker compose up

# Start in background (detached)
docker compose up -d

# Rebuild images and start
docker compose up --build

# Start specific service
docker compose up web

Stop services:

# Stop (keeps containers)
docker compose stop

# Stop and remove containers
docker compose down

# Stop, remove containers, volumes, and networks
docker compose down -v

# Also remove images (volumes are kept unless you add -v)
docker compose down --rmi all

View logs:

# All services
docker compose logs

# Follow logs (real-time)
docker compose logs -f

# Specific service
docker compose logs api

# Last 100 lines
docker compose logs --tail=100

List services:

docker compose ps

Output:

NAME                IMAGE           STATUS    PORTS
myapp-web           myapp-web       Up        0.0.0.0:8080->80/tcp
myapp-api           myapp-api       Up
myapp-postgres      postgres:15     Up

Execute commands in service:

# Open shell in service
docker compose exec api bash

# Run command
docker compose exec db psql -U postgres

# Run as different user
docker compose exec -u root api bash

View service configuration:

docker compose config

Shows resolved configuration with all variables substituted.


Restart services:

# Restart all
docker compose restart

# Restart specific service
docker compose restart api

Scale services:

# Run 3 instances of api
docker compose up -d --scale api=3

Build images:

# Build all images
docker compose build

# Build specific service
docker compose build api

# Build without cache
docker compose build --no-cache

Pull images:

# Pull all images
docker compose pull

# Pull specific service
docker compose pull db

Part 7: Environment Variables and .env Files

Using .env File

Create .env file:

# .env
DB_NAME=myapp
DB_USER=postgres
DB_PASSWORD=supersecret
API_PORT=5000
WEB_PORT=8080

docker-compose.yml:

version: '3.8'

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}

  api:
    build: ./api
    environment:
      DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@db:5432/${DB_NAME}
    ports:
      - "${API_PORT}:5000"

  web:
    build: ./web
    ports:
      - "${WEB_PORT}:80"

Variables automatically loaded from .env! ✓
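Under the hood this is plain key=value parsing plus ${VAR} substitution. A simplified Python sketch (it ignores quoting and the ${VAR:-default} forms that real Compose supports):

```python
import re

# Example .env content, matching the file above
env_text = """\
DB_USER=postgres
DB_PASSWORD=supersecret
WEB_PORT=8080
"""

# Parse key=value lines, skipping blanks and comments
env = {}
for line in env_text.splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        key, _, value = line.partition("=")
        env[key] = value

# Substitute ${VAR} placeholders in a compose-style snippet
compose_snippet = 'ports: "${WEB_PORT}:80"'
resolved = re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), compose_snippet)
print(resolved)  # ports: "8080:80"
```

This is roughly what `docker compose config` shows you: the file with every variable already substituted.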


Multiple Environment Files

# Use different env file
docker compose --env-file .env.production up

# Override with another file
docker compose --env-file .env.local up

Passing Environment Variables

# From command line
DB_PASSWORD=newsecret docker compose up

# System environment variables
export DB_PASSWORD=newsecret
docker compose up

Part 8: Profiles (Conditional Services)

What are Profiles?

Run different sets of services for different scenarios.

Example:

version: '3.8'

services:
  # Always run
  web:
    image: nginx
    ports:
      - "80:80"

  api:
    image: my-api
    depends_on:
      - db

  db:
    image: postgres

  # Only for development
  adminer:
    image: adminer
    profiles:
      - dev
    ports:
      - "8080:8080"

  # Only for debugging
  debug-tools:
    image: nicolaka/netshoot
    profiles:
      - debug
    command: sleep infinity

Usage:

# Start only core services
docker compose up

# Start with dev profile (includes adminer)
docker compose --profile dev up

# Start with debug profile
docker compose --profile debug up

# Start with multiple profiles
docker compose --profile dev --profile debug up
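The selection rule is simple: a service starts if it declares no profiles, or if at least one of its profiles is active. A minimal Python sketch of that rule:

```python
def services_to_start(services: dict, active_profiles: set) -> list:
    """A service runs if it has no profiles, or shares one with the
    active set (simplified sketch of the profile rule)."""
    return [
        name for name, profiles in services.items()
        if not profiles or profiles & active_profiles
    ]

# Mirrors the compose file above: core services plus two profiled ones
services = {
    "web": set(), "api": set(), "db": set(),
    "adminer": {"dev"}, "debug-tools": {"debug"},
}

print(services_to_start(services, set()))      # ['web', 'api', 'db']
print(services_to_start(services, {"dev"}))    # ['web', 'api', 'db', 'adminer']
```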

Part 9: Healthchecks

Adding Healthchecks

Healthcheck = Test if service is actually ready

services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s      # Check every 10 seconds
      timeout: 5s        # Fail if takes > 5 seconds
      retries: 3         # Try 3 times before giving up
      start_period: 30s  # Grace period on startup

  api:
    build: ./api
    depends_on:
      db:
        condition: service_healthy  # Wait for db to be healthy!
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

Benefits:

Without healthcheck:
├── depends_on waits for container to start
├── But container might not be ready yet!
└── API tries to connect to DB → Fails! ✗

With healthcheck:
├── depends_on waits for service to be HEALTHY
├── Container started AND ready
└── API connects successfully ✓
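The retry logic behind a healthcheck can be sketched in a few lines of Python (simplified: no interval sleeping, timeout, or start_period handling):

```python
def wait_healthy(check, retries=3):
    """A service is healthy once one probe succeeds, and unhealthy
    after `retries` consecutive failures (simplified sketch)."""
    failures = 0
    while failures < retries:
        if check():
            return True
        failures += 1
        # a real implementation would sleep(interval) between probes
    return False

# Simulated probe: fails twice, then succeeds (within 3 retries)
results = iter([False, False, True])
print(wait_healthy(lambda: next(results)))  # True
```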

Part 10: Advanced Example - Full Stack Application

Complete Real-World Example

docker-compose.yml:

version: '3.8'

services:
  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - web
      - api
    networks:
      - frontend
    restart: always

  # Frontend (React)
  web:
    build:
      context: ./frontend
      args:
        NODE_ENV: production
    container_name: react-app
    environment:
      - REACT_APP_API_URL=http://localhost/api
    networks:
      - frontend
    restart: always

  # Backend API (Node.js)
  api:
    build: ./backend
    container_name: nodejs-api
    environment:
      NODE_ENV: production
      DB_HOST: postgres
      DB_PORT: 5432
      DB_NAME: ${DB_NAME}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      REDIS_HOST: redis
      REDIS_PORT: 6379
      JWT_SECRET: ${JWT_SECRET}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - frontend
      - backend
    restart: always
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    container_name: postgres-db
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - backend
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: redis-cache
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data
    networks:
      - backend
    restart: always

  # Database Admin (Development only)
  adminer:
    image: adminer
    container_name: db-admin
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    networks:
      - backend
    profiles:
      - dev
    restart: unless-stopped

volumes:
  postgres-data:
    driver: local
  redis-data:
    driver: local

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

.env:

# Database
DB_NAME=myapp
DB_USER=appuser
DB_PASSWORD=strongpassword123

# JWT
JWT_SECRET=your-secret-key-change-in-production

# Node
NODE_ENV=production

Usage:

# Production (no adminer)
docker compose up -d

# Development (with adminer)
docker compose --profile dev up -d

# View logs
docker compose logs -f

# Stop
docker compose down

Part 11: Docker Compose Best Practices

1. Use Specific Image Tags

Bad:

services:
  db:
    image: postgres  # Latest version, unpredictable!

Good:

services:
  db:
    image: postgres:15-alpine  # Specific version

2. Use .env for Sensitive Data

Bad:

services:
  db:
    environment:
      POSTGRES_PASSWORD: hardcoded-password  # Never do this!

Good:

services:
  db:
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}  # From .env file

Add .env to .gitignore!


3. Use Named Volumes

Bad:

volumes:
  - ./data:/var/lib/postgresql/data  # Bind mount

Good:

volumes:
  - postgres-data:/var/lib/postgresql/data  # Named volume

volumes:
  postgres-data:

4. Add Healthchecks

services:
  api:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

5. Use Restart Policies

services:
  web:
    restart: unless-stopped  # Auto-restart unless manually stopped

6. Separate Networks

networks:
  frontend:  # Public-facing services
  backend:   # Internal services (database, cache)

7. Order Services Properly

services:
  db:        # Database first
  api:       # API depends on db
    depends_on:
      - db
  web:       # Web depends on api
    depends_on:
      - api

Summary

What We Learned:

✅ What Docker Compose is and why it's useful
✅ docker-compose.yml file structure
✅ Service configuration options
✅ Building multi-container applications
✅ Docker Compose commands
✅ Environment variables and .env files
✅ Profiles for different scenarios
✅ Healthchecks
✅ Networks and volumes in Compose
✅ Real-world examples
✅ Best practices

Key Takeaways:

1. Docker Compose = Multi-container management tool
2. One YAML file describes entire application
3. Single command to start/stop everything
4. Automatic networking between services
5. Perfect for development and simple deployments
6. Use service names for container communication
7. Always use .env for sensitive data
8. Add healthchecks for reliable startups

Common Commands:

docker compose up -d          # Start in background
docker compose down           # Stop and remove
docker compose logs -f        # Follow logs
docker compose ps             # List services
docker compose exec api bash  # Access service shell
docker compose build          # Rebuild images
docker compose restart        # Restart services

🎉 Excellent! You now know Docker Compose!

You can now:

  • Manage multi-container applications easily
  • Define entire stacks in one file
  • Use Docker Compose for development
  • Deploy simple production applications

Docker Networking

Now let's learn how containers communicate with each other and the outside world.


Part 1: Understanding Container Networking Basics

The Networking Challenge

Simple Question: How do containers talk to each other?

Scenario:

You have:
├── Web application container (needs to talk to database)
├── Database container (needs to receive connections)
└── Redis cache container (needs to be accessed)

How do they communicate? 🤔

Container Isolation

Remember: Containers are ISOLATED

By default:
├── Each container has its own network stack
├── Own IP address
├── Own network interface
├── Cannot see other containers
└── Like separate computers on a network

Visual:

┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
│  Container 1    │  │  Container 2    │  │  Container 3    │
│                 │  │                 │  │                 │
│  IP: 172.17.0.2 │  │  IP: 172.17.0.3 │  │  IP: 172.17.0.4 │
│                 │  │                 │  │                 │
└─────────────────┘  └─────────────────┘  └─────────────────┘
        ↑                    ↑                    ↑
        └────────────────────┴────────────────────┘
                    Docker Network

How Containers Access the Outside World

Your Computer (Host):

Your Computer:
├── IP: 192.168.1.100 (on your home network)
├── Can access internet
└── Runs Docker

Container inside:
├── Has own IP: 172.17.0.2
├── Can access internet through host
└── Uses NAT (Network Address Translation)

Visual:

Internet
   ↕
Your Computer (192.168.1.100)
   ↕
Docker Network (172.17.0.0/16)
   ↕
Containers (172.17.0.2, 172.17.0.3, ...)

Part 2: Default Bridge Network

What is the Bridge Network?

Bridge Network = Default network Docker creates

When you run a container without specifying network:

docker run nginx

It automatically connects to the "bridge" network.


Viewing Networks

List all networks:

docker network ls

Output:

NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
f6e5d4c3b2a1   host      host      local
1a2b3c4d5e6f   none      null      local

Three default networks:

bridge:
├── Default network
├── Containers can communicate
└── Most commonly used

host:
├── Container uses host's network
├── No isolation
└── Advanced use case

none:
├── No network
└── Completely isolated

Inspecting the Bridge Network

docker network inspect bridge

Output (simplified):

[
    {
        "Name": "bridge",
        "Driver": "bridge",
        "Scope": "local",
        "IPAM": {
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "abc123...": {
                "Name": "container1",
                "IPv4Address": "172.17.0.2/16"
            }
        }
    }
]

Key Information:

Subnet: 172.17.0.0/16
└── IP range for containers

Gateway: 172.17.0.1
└── Docker's network gateway

Containers:
└── Lists all containers on this network
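You can explore what a subnet like 172.17.0.0/16 actually means with Python's standard ipaddress module:

```python
import ipaddress

# The values reported by `docker network inspect bridge`
subnet = ipaddress.ip_network("172.17.0.0/16")
gateway = ipaddress.ip_address("172.17.0.1")
container = ipaddress.ip_address("172.17.0.2")

print(gateway in subnet)     # True  - the gateway lives inside the subnet
print(container in subnet)   # True  - so do container IPs
print(subnet.num_addresses)  # 65536 - addresses available in a /16
```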

Testing Default Network

Run two containers:

# Container 1
docker run -d --name web1 nginx

# Container 2
docker run -d --name web2 nginx

Check their IPs:

docker inspect web1 | findstr IPAddress

Output:

"IPAddress": "172.17.0.2"
docker inspect web2 | findstr IPAddress

Output:

"IPAddress": "172.17.0.3"

Containers have different IPs on same network! ✓


Trying to Communicate (Default Bridge)

Access web1 from web2:

docker exec -it web2 bash

Inside web2 container:

# Try to ping web1 by IP
apt-get update && apt-get install -y iputils-ping
ping 172.17.0.2

# Output:
# PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
# 64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.073 ms
# ✓ Can reach by IP!

# Try to ping by name
ping web1

# Output:
# ping: web1: Name or service not known
# ✗ Cannot reach by name!

Important Discovery:

Default bridge network:
✓ Containers CAN communicate by IP address
✗ Containers CANNOT communicate by name
└── Must use IP addresses (not convenient!)
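Conceptually, Docker's embedded DNS server (127.0.0.11) keeps a name → IP table only for user-defined networks; on the default bridge that table is empty. A toy Python sketch of the difference:

```python
# Toy name tables (IPs assumed for illustration)
default_bridge_dns = {}                      # no names registered
custom_network_dns = {"web1": "172.18.0.2"}  # names registered automatically

def resolve(dns, name):
    try:
        return dns[name]
    except KeyError:
        return f"ping: {name}: Name or service not known"

print(resolve(default_bridge_dns, "web1"))  # fails, like on the default bridge
print(resolve(custom_network_dns, "web1"))  # 172.18.0.2, like on a custom network
```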

Part 3: Custom Bridge Networks

Why Create Custom Networks?

Custom networks provide:

✓ Automatic DNS resolution (use container names!)
✓ Better isolation
✓ More control
✓ Can create multiple networks
└── Best practice for multi-container apps

Creating a Custom Network

Syntax:

docker network create NETWORK_NAME

Example:

docker network create my-network

Output:

a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0
↑
Network ID

Verify:

docker network ls

Output:

NETWORK ID     NAME         DRIVER    SCOPE
a1b2c3d4e5f6   bridge       bridge    local
b2c3d4e5f6a7   my-network   bridge    local  ← Your new network!
f6e5d4c3b2a1   host         host      local
1a2b3c4d5e6f   none         null      local

Using Custom Network

Run containers on custom network:

# Container 1
docker run -d --name app1 --network my-network nginx

# Container 2
docker run -d --name app2 --network my-network nginx

Now test communication:

docker exec -it app2 bash

Inside app2:

# Install curl
apt-get update && apt-get install -y curl

# Access app1 by NAME!
curl http://app1

# Output:
# <!DOCTYPE html>
# <html>
# <head>
# <title>Welcome to nginx!</title>
# ...
# ✓ Works! Can use container name!

# Also try by IP
curl http://172.18.0.2

# ✓ Also works!

Magic! 🎉

Custom network provides:
✓ DNS resolution (container name → IP)
✓ No need to know IP addresses
✓ Use friendly names
└── Much easier to work with!

Real-World Example: Web App + Database

Scenario: Flask app needs to connect to MySQL

Step 1: Create custom network

docker network create app-network

Step 2: Run MySQL container

docker run -d \
  --name mysql-db \
  --network app-network \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=myapp \
  mysql:8.0

Step 3: Create Python app

app.py:

import mysql.connector
import time

# Wait for MySQL to be ready (first startup can take a while),
# retrying instead of sleeping a fixed amount
for attempt in range(30):
    try:
        # Connect using container name!
        db = mysql.connector.connect(
            host="mysql-db",  # ← Container name!
            user="root",
            password="secret",
            database="myapp"
        )
        break
    except mysql.connector.Error:
        time.sleep(2)
else:
    raise SystemExit("MySQL never became ready")

cursor = db.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS users (id INT, name VARCHAR(50))")
cursor.execute("INSERT INTO users VALUES (1, 'Alice')")
db.commit()

cursor.execute("SELECT * FROM users")
for row in cursor:
    print(f"User: {row[1]}")

db.close()

requirements.txt:

mysql-connector-python

Dockerfile:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]

Step 4: Build and run app

docker build -t my-app .

docker run --network app-network my-app

Output:

User: Alice

✓ App connected to database using container name!


Part 4: Port Publishing (Port Mapping)

Understanding Port Publishing

Problem:

Container running web server on port 80
├── Port 80 INSIDE container
├── Your computer can't access it
└── External users can't access it

Solution: Port Publishing

Map container port to host port:
Container port 80 → Host port 8080

Now:
├── Access localhost:8080 on your computer
│       ↓
├── Traffic goes to container port 80
└── Web server accessible! ✓

Port Publishing Syntax

Syntax:

docker run -p HOST_PORT:CONTAINER_PORT IMAGE

Examples:

# Map port 8080 to 80
docker run -p 8080:80 nginx

# Map port 3000 to 3000
docker run -p 3000:3000 node-app

# Map port 5432 to 5432
docker run -p 5432:5432 postgres

Multiple Port Mappings

docker run -p 8080:80 -p 8443:443 nginx
#          ↑            ↑
#     HTTP port    HTTPS port

Viewing Port Mappings

docker ps

Output:

CONTAINER ID   IMAGE   PORTS                                   NAMES
abc123def456   nginx   0.0.0.0:8080->80/tcp                   web
                       ↑       ↑    ↑  ↑
                       │       │    │  └── Protocol
                       │       │    └── Container port
                       │       └── Host port
                       └── Listen on all interfaces

Port Binding to Specific Interface

Bind to all interfaces (default):

docker run -p 8080:80 nginx
# Accessible from anywhere

Bind to localhost only:

docker run -p 127.0.0.1:8080:80 nginx
# Only accessible from this computer
# Not accessible from network
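A -p spec is just HOST_PORT:CONTAINER_PORT, optionally prefixed with a bind IP. A simplified Python sketch of parsing it (no port ranges or /udp suffixes):

```python
def parse_publish(spec: str):
    """Parse a -p spec into (bind_ip, host_port, container_port).
    Simplified: handles HOST:CONTAINER and IP:HOST:CONTAINER only."""
    parts = spec.split(":")
    if len(parts) == 2:
        # No IP given: bind on all interfaces
        host_ip, host_port, container_port = "0.0.0.0", *parts
    elif len(parts) == 3:
        host_ip, host_port, container_port = parts
    else:
        raise ValueError(f"unsupported spec: {spec}")
    return host_ip, int(host_port), int(container_port)

print(parse_publish("8080:80"))            # ('0.0.0.0', 8080, 80)
print(parse_publish("127.0.0.1:8080:80"))  # ('127.0.0.1', 8080, 80)
```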

Automatic Port Assignment

Let Docker choose the port:

docker run -P nginx
#          ↑
#     Capital P

Docker assigns random port:

docker ps

Output:

PORTS
0.0.0.0:32768->80/tcp
        ↑
   Random port assigned

Access: http://localhost:32768


Part 5: Network Types in Detail

1. Bridge Network (Default)

What it is:

Default network type
├── Software bridge
├── Containers on same bridge can communicate
└── Most common type

When to use:

✓ Single host
✓ Multiple containers need to communicate
✓ Standard web applications

Example:

docker network create --driver bridge my-bridge
docker run --network my-bridge nginx

2. Host Network

What it is:

Container uses host's network directly
├── No network isolation
├── Container shares host's IP
└── Better performance (no NAT)

Example:

docker run --network host nginx

What happens:

Container:
├── Uses host's IP address
├── Port 80 on container = Port 80 on host
├── No port mapping needed
└── Cannot run multiple containers on same port

When to use:

✓ Need maximum network performance
✓ Network debugging
✗ Less isolation (security concern)

3. None Network

What it is:

No network at all
├── Completely isolated
├── No internet access
└── No container communication

Example:

docker run --network none nginx

When to use:

✓ Maximum isolation
✓ Security-critical containers
✓ Batch processing (no network needed)

4. Overlay Network (Advanced)

What it is:

Connects containers across multiple Docker hosts
├── For Docker Swarm
├── Multi-host networking
└── Advanced orchestration

Example:

docker network create --driver overlay my-overlay

When to use:

✓ Docker Swarm mode
✓ Multiple servers
✓ Distributed applications

Part 6: Connecting Containers to Multiple Networks

Container on Multiple Networks

A container can be on multiple networks!

Example:

# Create two networks
docker network create frontend
docker network create backend

# Run database on backend only
docker run -d --name db --network backend mysql

# Run API on both networks
docker run -d --name api --network backend nginx
docker network connect frontend api

# Run web on frontend only
docker run -d --name web --network frontend nginx

Result:

frontend network:
├── api ✓
└── web ✓

backend network:
├── api ✓
└── db ✓

Communication:
├── web → api (via frontend) ✓
├── api → db (via backend) ✓
├── web → db ✗ (not on same network)
└── Isolation achieved! ✓
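The reachability rule boils down to set intersection: two containers can talk if and only if they share at least one network. A tiny Python sketch:

```python
# Network memberships matching the setup above
memberships = {
    "web": {"frontend"},
    "api": {"frontend", "backend"},
    "db":  {"backend"},
}

def can_reach(a: str, b: str) -> bool:
    # Reachable iff the two containers share at least one network
    return bool(memberships[a] & memberships[b])

print(can_reach("web", "api"))  # True  (both on frontend)
print(can_reach("api", "db"))   # True  (both on backend)
print(can_reach("web", "db"))   # False (no common network)
```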

Connecting/Disconnecting Networks

Connect container to network:

docker network connect NETWORK_NAME CONTAINER_NAME

Disconnect container from network:

docker network disconnect NETWORK_NAME CONTAINER_NAME

Example:

# Connect web to backend
docker network connect backend web

# Now web can access db!

# Disconnect web from backend
docker network disconnect backend web

# web can no longer access db

Part 7: DNS and Service Discovery

Automatic DNS Resolution

Custom networks provide automatic DNS:

Container names = Hostnames

my-app container:
├── Can be reached at: my-app
├── Can be reached at: my-app.my-network
└── Automatic DNS resolution

Testing DNS Resolution

Run containers:

docker network create test-net
docker run -d --name server1 --network test-net nginx
docker run -it --name client --network test-net ubuntu bash

Inside client:

# Install tools
apt-get update && apt-get install -y dnsutils curl

# Test DNS resolution
nslookup server1

# Output:
# Server:         127.0.0.11
# Address:        127.0.0.11#53
# 
# Name:   server1
# Address: 172.18.0.2

# ✓ Container name resolved to IP!

# Access server
curl http://server1
# ✓ Works!

Network Aliases

Give containers additional names:

docker run -d \
  --name mysql-db \
  --network app-net \
  --network-alias database \
  --network-alias db \
  mysql

Now can access as:

mysql-db   (container name)
database   (alias)
db         (alias)

Useful for:

✓ Backward compatibility
✓ Multiple names for same service
✓ Clearer naming
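Aliases are simply extra DNS names resolving to the same container IP, which a toy Python sketch can illustrate (the IP is assumed for illustration):

```python
# The container name plus its two aliases all point at one IP
container_ip = "172.18.0.2"  # assumed, for illustration
dns = {name: container_ip for name in ("mysql-db", "database", "db")}

print(dns["database"])                                  # 172.18.0.2
print(dns["mysql-db"] == dns["database"] == dns["db"])  # True
```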

Part 8: Network Isolation Patterns

Pattern 1: Multi-Tier Application

Structure:

┌─────────────────────────────────────┐
│         frontend network             │
│  ┌──────────┐      ┌──────────┐    │
│  │   Web    │──────│   API    │    │
│  └──────────┘      └──────────┘    │
└─────────────────────────┬───────────┘
                          │
┌─────────────────────────┴───────────┐
│         backend network              │
│  ┌──────────┐      ┌──────────┐    │
│  │   API    │──────│ Database │    │
│  └──────────┘      └──────────┘    │
└─────────────────────────────────────┘

Isolation:
├── Web can only talk to API
├── API can talk to both
└── Database is hidden from Web

Implementation:

# Create networks
docker network create frontend
docker network create backend

# Database (backend only)
docker run -d --name db --network backend postgres

# API (both networks)
docker run -d --name api --network backend my-api
docker network connect frontend api

# Web (frontend only)
docker run -d --name web --network frontend -p 80:80 nginx

Pattern 2: Microservices Isolation

Each service on own network:

┌────────────┐  ┌────────────┐  ┌────────────┐
│  Service A │  │  Service B │  │  Service C │
│  Network A │  │  Network B │  │  Network C │
└────────────┘  └────────────┘  └────────────┘
       ↓               ↓               ↓
   ┌────────────────────────────────────┐
   │       API Gateway Network          │
   │         (Common Network)           │
   └────────────────────────────────────┘

Part 9: Practical Multi-Container Application

Building a Complete App

Let's build: Web App + API + Database

Step 1: Create networks

docker network create frontend
docker network create backend

Step 2: Run PostgreSQL

docker run -d \
  --name postgres \
  --network backend \
  -e POSTGRES_PASSWORD=secret \
  -e POSTGRES_DB=myapp \
  postgres:15

Step 3: Create API (api.py)

from flask import Flask, jsonify
import psycopg2
import os

app = Flask(__name__)

def get_db():
    return psycopg2.connect(
        host="postgres",  # Container name!
        database="myapp",
        user="postgres",
        password="secret"
    )

@app.route('/api/users')
def get_users():
    db = get_db()
    cursor = db.cursor()
    cursor.execute("SELECT version()")
    version = cursor.fetchone()
    return jsonify({"database": version[0]})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

requirements.txt:

flask
psycopg2-binary

Dockerfile:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY api.py .
CMD ["python", "api.py"]

Build and run:

docker build -t my-api .

docker run -d \
  --name api \
  --network backend \
  my-api

# Connect to frontend too
docker network connect frontend api

Step 4: Create Web Frontend (index.html)

<!DOCTYPE html>
<html>
<head>
    <title>My App</title>
</head>
<body>
    <h1>Multi-Container App</h1>
    <button onclick="fetchData()">Get Database Info</button>
    <pre id="result"></pre>
    
    <script>
        async function fetchData() {
            // Relative URL: nginx proxies /api/ to the api container,
            // whose port 5000 is not published on the host
            const response = await fetch('/api/users');
            const data = await response.json();
            document.getElementById('result').textContent = JSON.stringify(data, null, 2);
        }
    </script>
</body>
</html>

nginx.conf:

server {
    listen 80;
    
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
    
    location /api/ {
        proxy_pass http://api:5000/api/;
    }
}

Dockerfile:

FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/conf.d/default.conf

Build and run:

docker build -t my-web .

docker run -d \
  --name web \
  --network frontend \
  -p 8080:80 \
  my-web

Step 5: Test the application

Open browser: http://localhost:8080

Click button → Data from database! ✓

Architecture:

User Browser
     ↓
Web Container (frontend network)
     ↓
API Container (frontend + backend networks)
     ↓
Database Container (backend network)

✓ Web can't directly access database
✓ API bridges the networks
✓ Proper isolation!

Part 10: Network Commands Reference

Complete Network Commands

# List networks
docker network ls

# Create network
docker network create NETWORK_NAME

# Inspect network
docker network inspect NETWORK_NAME

# Remove network
docker network rm NETWORK_NAME

# Remove all unused networks
docker network prune

# Connect container to network
docker network connect NETWORK_NAME CONTAINER_NAME

# Disconnect container from network
docker network disconnect NETWORK_NAME CONTAINER_NAME

Creating Networks with Options

# Create with custom subnet
docker network create --subnet=192.168.1.0/24 my-net

# Create with custom gateway
docker network create --gateway=192.168.1.1 my-net

# Create with driver
docker network create --driver bridge my-net

# Create with labels
docker network create --label env=prod my-net

Summary

What We Learned:

✅ Container networking basics
✅ Default bridge network
✅ Custom bridge networks
✅ DNS resolution in custom networks
✅ Port publishing/mapping
✅ Network types (bridge, host, none)
✅ Multiple networks per container
✅ Network isolation patterns
✅ Multi-container applications
✅ Service discovery

Key Concepts:

1. Use custom networks for container communication
2. Container names = Hostnames (with custom networks)
3. Port publishing exposes containers to outside
4. Multiple networks = Network isolation
5. Frontend/Backend pattern for security

Best Practices:

✓ Always use custom networks (not default bridge)
✓ Use container names (not IP addresses)
✓ Separate frontend/backend networks
✓ Only publish ports that need external access
✓ Use network aliases for flexibility

Excellent! You now understand Docker networking!

This completes Phase 6: Docker Networking!

🎉 Congratulations! You've completed the Basic Docker Roadmap!

You now know:

  • ✅ Phase 1: Understanding Docker (Why, What, Architecture)
  • ✅ Phase 2: Installation & First Steps
  • ✅ Phase 3: Working with Images
  • ✅ Phase 4: Creating Your Own Images (Dockerfile)
  • ✅ Phase 5: Container Data Management (Volumes)
  • ✅ Phase 6: Docker Networking

Container Data Management (Volumes)

Now let's learn about one of the most important topics in Docker: how to manage data in containers.


Part 1: The Container Data Problem

Understanding Container Filesystem

Important Concept: Containers are EPHEMERAL (temporary)

What does this mean?

When you create a container:
├── It has its own filesystem
├── You can create/modify files inside
└── Everything works normally

When you delete the container:
├── ALL data inside is lost! ✗
├── Files gone forever
└── No way to recover

Demonstrating the Problem

Let's see this in action!

Step 1: Run Ubuntu container and create a file

docker run -it --name test-container ubuntu bash

Inside the container:

# You're now inside Ubuntu container
# Create a file
echo "Important data!" > /data.txt

# Verify it exists
cat /data.txt
# Output: Important data!

# Exit container
exit

Step 2: Start the same container again

docker start test-container
docker exec -it test-container bash

Inside container:

# Check if file still exists
cat /data.txt
# Output: Important data!

# File is still there! ✓
exit

Step 3: Remove and create new container

# Remove the container
docker rm test-container

# Create a new container (same image)
docker run -it --name test-container2 ubuntu bash

Inside new container:

# Try to find the file
cat /data.txt
# Error: No such file or directory ✗

# File is GONE! ✗

What happened?

Container 1:
├── Created data.txt
├── Data stored in container's writable layer
└── Removed → Data lost forever! ✗

Container 2:
├── Fresh container from same image
├── No data from Container 1
└── Starting from scratch

Problem: Data is tied to container lifecycle!

Real-World Problem Scenarios

Scenario 1: Database Container

Run MySQL container:
├── Create database
├── Add tables
└── Insert 1000 customer records

Container crashes:
└── Restart container → Data still there ✓

Accidentally delete container:
├── All data GONE! ✗
├── 1000 customer records lost!
└── Disaster! ✗

Scenario 2: Web Application

Upload feature:
├── Users upload photos
└── Photos saved in /uploads/ inside container

Update application (new container):
├── Deploy new version
├── Remove old container
├── All uploaded photos GONE! ✗
└── Users angry! ✗

Scenario 3: Log Files

Application writes logs:
├── Debug logs in /var/log/app/
└── Error logs accumulating

Container deleted:
├── All logs lost ✗
├── Can't debug past issues ✗
└── No audit trail ✗

The Solution: Docker Volumes

Volumes = Persistent storage outside the container

Think of it as:

Container = Temporary hotel room
├── You stay there temporarily
├── When you check out, room is cleaned
└── Your stuff is gone

Volume = Your storage unit
├── Permanent storage space
├── Exists outside the hotel
├── Your stuff stays even after checkout
└── Can access from any room (container)

Visual:

WITHOUT Volumes:
┌──────────────────┐
│   Container      │
│                  │
│  /data/          │ ← Data inside
│  └── files       │
└──────────────────┘
     ↓ Delete
    Data lost! ✗


WITH Volumes:
┌──────────────────┐     ┌──────────────┐
│   Container      │     │   Volume     │
│                  │────→│              │
│  /data/ (mount)  │     │  Real data   │
│                  │     │  stored here │
└──────────────────┘     └──────────────┘
     ↓ Delete                   ↓
  Container gone            Data safe! ✓

Part 2: What are Docker Volumes?

Simple Definition

Volume = A storage space managed by Docker that exists outside containers

Key Characteristics:

Volumes are:
├── Persistent (survive container deletion)
├── Managed by Docker
├── Can be shared between containers
├── Independent of container lifecycle
├── Stored on host machine
└── Easy to backup

How Volumes Work

Conceptual Model:

Host Machine (Your Computer):
├── Docker manages a special directory
├── /var/lib/docker/volumes/ (Linux)
└── This is where volume data is stored

Volume:
├── Named storage space
└── Like a hard drive managed by Docker

Container:
├── Mounts (connects to) the volume
├── Sees volume as a directory
└── Reads/writes to volume = permanent storage

Visual:

Your Computer Filesystem:
/var/lib/docker/volumes/
├── my-volume/
│   └── _data/
│       ├── file1.txt
│       └── file2.txt
└── db-volume/
    └── _data/
        └── database.db

Container:
/app/data/ ──(mounted)──→ my-volume
                          ↓
                    Actual storage location

Part 3: Creating and Using Volumes

Creating a Volume

Syntax:

docker volume create VOLUME_NAME

Example:

docker volume create my-data

Output:

my-data

That's it! Volume created! ✓


Listing Volumes

docker volume ls

Output:

DRIVER    VOLUME NAME
local     my-data

Inspecting a Volume

docker volume inspect my-data

Output:

[
    {
        "CreatedAt": "2026-02-24T10:30:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data/_data",
        "Name": "my-data",
        "Options": {},
        "Scope": "local"
    }
]

Important field:

Mountpoint: "/var/lib/docker/volumes/my-data/_data"
                    ↑
            Where data is actually stored on your computer

Using a Volume with Container

Mount a volume when running a container:

Syntax:

docker run -v VOLUME_NAME:/path/in/container IMAGE

Example:

docker run -it -v my-data:/data ubuntu bash

What this does:

-v my-data:/data
   ↑       ↑
   │       └── Path inside container
   └── Volume name

Container sees /data/ directory
/data/ is actually stored in my-data volume
Data persists even after container is deleted!
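
The -v value always has the shape SOURCE:TARGET, with an optional :ro suffix for read-only (covered later in the commands reference). As a toy illustration of how those pieces split — not Docker's real parser, which also copes with Windows drive-letter paths:

```python
def parse_volume_spec(spec: str):
    """Split a -v argument into (source, target, read_only).
    Toy version: assumes no colon inside either path, so it would
    mis-handle Windows paths such as C:\\Users\\... -- Docker's
    real parser handles those."""
    parts = spec.split(":")
    read_only = parts[-1] == "ro"
    if read_only:
        parts = parts[:-1]
    source, target = parts
    return source, target, read_only

print(parse_volume_spec("my-data:/data"))     # ('my-data', '/data', False)
print(parse_volume_spec("my-data:/data:ro"))  # ('my-data', '/data', True)
```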

Practical Example: Persistent Data

Let's see volumes in action!

Step 1: Create a volume

docker volume create persistent-data

Step 2: Run container with volume

docker run -it --name container1 -v persistent-data:/data ubuntu bash

Inside container:

# Create some files
echo "This data will persist!" > /data/important.txt
echo "User database" > /data/users.db
echo "Configuration" > /data/config.json

# List files
ls /data/
# Output: important.txt  users.db  config.json

# Exit
exit

Step 3: Delete the container

docker rm container1

Step 4: Create NEW container with SAME volume

docker run -it --name container2 -v persistent-data:/data ubuntu bash

Inside new container:

# Check if data exists
ls /data/
# Output: important.txt  users.db  config.json

cat /data/important.txt
# Output: This data will persist!

# Data is still there! ✓
# Even though we deleted container1!

🎉 Volume preserved the data!


Multiple Containers Sharing a Volume

Volumes can be shared between containers!

Terminal 1:

docker run -it --name writer -v shared-data:/data ubuntu bash

Inside writer container:

# Write data
echo "Message from writer" > /data/message.txt

# Keep container running
sleep infinity

Terminal 2 (new terminal):

docker run -it --name reader -v shared-data:/data ubuntu bash

Inside reader container:

# Read data written by writer
cat /data/message.txt
# Output: Message from writer

# Data shared between containers! ✓

Use case:

Example: Microservices sharing data

Container 1 (Producer):
└── Writes log files to /logs

Container 2 (Analyzer):
└── Reads log files from /logs

Both mount same volume:
└── Data flows between them! ✓

Part 4: Bind Mounts

What are Bind Mounts?

Bind Mount = Mount a directory from YOUR computer into a container

Difference from Volumes:

Volume:
├── Managed by Docker
├── Stored in Docker's directory
└── docker volume create my-vol

Bind Mount:
├── You choose the directory
├── Any directory on your computer
└── Mount your own folder

Visual:

Volume (Managed by Docker):
Your Computer                Container
Docker manages:             
/var/lib/docker/volumes/    
└── my-vol/_data/     ────→ /data/
    └── files               

Bind Mount (You manage):
Your Computer                Container
Your directory:
C:\Users\You\project\       
└── code/             ────→ /app/
    └── files

Creating Bind Mounts

Syntax:

docker run -v /host/path:/container/path IMAGE

Windows example:

docker run -v C:\Users\YourName\myapp:/app ubuntu

Absolute path required!


Practical Example: Development Workflow

This is EXTREMELY useful for development!

Scenario: Developing a Python app

Step 1: Create project directory

mkdir C:\Users\YourName\python-app
cd C:\Users\YourName\python-app

Step 2: Create app.py

# app.py
print("Hello from Python!")
print("Version 1.0")

Step 3: Run with bind mount

docker run -it -v C:\Users\YourName\python-app:/app python:3.11 bash

Inside container:

cd /app
ls
# Output: app.py

python app.py
# Output: 
# Hello from Python!
# Version 1.0

# Leave this interactive shell open --
# the container keeps running as long as bash does

Step 4: Edit file on YOUR computer (not in container)

Open app.py in your editor and change it:

# app.py
print("Hello from Python!")
print("Version 2.0 - Updated!")
print("New feature added!")

Save the file

Step 5: Run again in container (same container still running)

# Still inside the container
python app.py
# Output:
# Hello from Python!
# Version 2.0 - Updated!
# New feature added!

# Changes reflected immediately! ✓

What happened?

File on your computer:
C:\Users\YourName\python-app\app.py
                    ↓
                (bind mount)
                    ↓
File in container:
/app/app.py

They're the SAME file!
Edit on computer → Changes in container immediately! ✓
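
To get a feel for "same file, two paths" on any machine, a symlink makes a rough stand-in. This is an analogy only — a real bind mount is set up by the kernel's mount mechanism, not a link:

```python
import os
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

host_file = tmp / "app.py"             # the copy "on your computer"
host_file.write_text('print("Version 1.0")')

container_view = tmp / "mounted.py"    # the path "inside the container"
os.symlink(host_file, container_view)  # both names now reach one file

host_file.write_text('print("Version 2.0 - Updated!")')  # edit on the "host"
seen_in_container = container_view.read_text()
print(seen_in_container)               # the edit is visible instantly
```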

Bind Mount Benefits for Development

Traditional development:

Without bind mount:
1. Edit code on computer
2. Copy code into container (slow)
3. Test
4. Find bug
5. Exit container
6. Edit code again
7. Copy into container again (slow)
8. Repeat... ✗

With bind mount:

With bind mount:
1. Edit code on computer
2. Changes instantly in container ✓
3. Test immediately
4. Edit again
5. Test immediately
6. Fast iteration! ✓

Modern Bind Mount Syntax

Old syntax:

docker run -v C:\path:/container/path image

New syntax (recommended):

docker run --mount type=bind,source=C:\path,target=/container/path image

Example:

docker run --mount type=bind,source=C:\Users\YourName\myapp,target=/app python:3.11

Both work, but --mount is more explicit and clear.


Part 5: Volume vs Bind Mount - When to Use What?

Comparison

┌─────────────────────────────────────────────────────┐
│              VOLUMES vs BIND MOUNTS                 │
├─────────────────────────────────────────────────────┤
│                                                     │
│  VOLUMES:                                          │
│  ✓ Managed by Docker                              │
│  ✓ Better for production                          │
│  ✓ Works on all platforms                         │
│  ✓ Easy to backup                                 │
│  ✓ Can be shared easily                           │
│  ✗ Need docker volume commands to manage          │
│                                                     │
│  BIND MOUNTS:                                      │
│  ✓ Direct access to files                         │
│  ✓ Great for development                          │
│  ✓ Easy to edit files                             │
│  ✓ No docker commands needed                      │
│  ✗ Path must exist on host                        │
│  ✗ Platform-specific paths                        │
│                                                     │
└─────────────────────────────────────────────────────┘

When to Use Volumes

Use volumes for:

✓ Database data
  └── MySQL, PostgreSQL, MongoDB

✓ Production data
  └── Uploaded files, generated reports

✓ Data that must persist
  └── User data, configurations

✓ Shared data between containers
  └── Microservices communication

✓ Backups
  └── Easy to backup entire volume

Example: Database

docker run -d \
  --name mysql-db \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:8.0

When to Use Bind Mounts

Use bind mounts for:

✓ Development
  └── Edit code on computer, test in container

✓ Configuration files
  └── nginx.conf, app config

✓ Source code during development
  └── Live reload

✓ When you need direct file access
  └── Easy to edit/view files

Example: Development

docker run -d \
  -v C:\Users\You\myapp:/app \
  -p 5000:5000 \
  python:3.11 \
  python /app/app.py

Part 6: Real-World Examples

Example 1: MySQL Database with Volume

Run MySQL with persistent data:

# Create volume for database
docker volume create mysql-data

# Run MySQL container
docker run -d \
  --name mysql-db \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=mypassword \
  -e MYSQL_DATABASE=myapp \
  -p 3306:3306 \
  mysql:8.0

What happens:

Container created:
├── MySQL running
├── Creates database files
└── Stored in mysql-data volume

Stop/Remove container:
├── Container gone
└── Data safe in volume ✓

Start new container with same volume:
├── All databases restored ✓
└── No data loss ✓

Test it:

# Connect to MySQL
docker exec -it mysql-db mysql -uroot -pmypassword

# Inside MySQL:
USE myapp;
CREATE TABLE users (id INT, name VARCHAR(50));
INSERT INTO users VALUES (1, 'Alice');
SELECT * FROM users;
# Data created ✓

exit

# Remove container
docker stop mysql-db
docker rm mysql-db

# Create new container with same volume
docker run -d \
  --name mysql-db-new \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=mypassword \
  mysql:8.0

# Wait 10 seconds for MySQL to start
# Connect again
docker exec -it mysql-db-new mysql -uroot -pmypassword myapp

# Check data
SELECT * FROM users;
# Output: 1 | Alice
# Data persisted! ✓

Example 2: Web App Development with Bind Mount

Create a simple web app:

Directory structure:

my-website/
├── index.html
├── style.css
└── app.js

index.html:

<!DOCTYPE html>
<html>
<head>
    <title>My App</title>
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <h1>Hello Docker!</h1>
    <p id="message">Loading...</p>
    <script src="app.js"></script>
</body>
</html>

style.css:

body {
    font-family: Arial;
    background-color: #f0f0f0;
    padding: 20px;
}

app.js:

document.getElementById('message').textContent = 'Version 1.0';

Run with bind mount:

docker run -d \
  --name web-dev \
  -v C:\Users\YourName\my-website:/usr/share/nginx/html \
  -p 8080:80 \
  nginx:alpine

Access: http://localhost:8080

Now edit files on your computer:

Change app.js:

document.getElementById('message').textContent = 'Version 2.0 - UPDATED!';

Refresh browser → Changes appear immediately! ✓

No need to rebuild or restart container!


Example 3: Sharing Data Between Containers

Scenario: Log producer and analyzer

Create shared volume:

docker volume create shared-logs

Container 1: Producer (generates logs)

docker run -d \
  --name log-producer \
  -v shared-logs:/logs \
  ubuntu \
  bash -c 'while true; do echo "Log entry: $(date)" >> /logs/app.log; sleep 5; done'

Container 2: Analyzer (reads logs)

docker run -it \
  --name log-analyzer \
  -v shared-logs:/logs \
  ubuntu \
  bash

Inside analyzer:

# Watch logs in real-time
tail -f /logs/app.log

# Output:
# Log entry: Mon Feb 24 10:30:00 UTC 2026
# Log entry: Mon Feb 24 10:30:05 UTC 2026
# Log entry: Mon Feb 24 10:30:10 UTC 2026
# ... keeps updating

# Both containers accessing same volume! ✓

Part 7: Volume Commands Reference

Complete Volume Commands

# Create volume
docker volume create VOLUME_NAME

# List volumes
docker volume ls

# Inspect volume (see details)
docker volume inspect VOLUME_NAME

# Remove volume
docker volume rm VOLUME_NAME

# Remove all unused volumes
docker volume prune

# Remove volume with force
docker volume rm -f VOLUME_NAME

Using Volumes with Containers

# Run with named volume
docker run -v VOLUME_NAME:/path IMAGE

# Run with bind mount (absolute path)
docker run -v /host/path:/container/path IMAGE

# Run with bind mount (current directory)
docker run -v ${PWD}:/app IMAGE

# Multiple volumes
docker run -v vol1:/data1 -v vol2:/data2 IMAGE

# Read-only volume
docker run -v VOLUME_NAME:/path:ro IMAGE
# :ro = read-only

Modern Mount Syntax

# Volume mount
docker run --mount type=volume,source=VOLUME_NAME,target=/path IMAGE

# Bind mount
docker run --mount type=bind,source=/host/path,target=/path IMAGE

# Read-only mount
docker run --mount type=volume,source=VOL,target=/path,readonly IMAGE

Part 8: Anonymous Volumes

What are Anonymous Volumes?

Anonymous Volume = Volume without a name

Created automatically by Docker when you don't specify a name:

docker run -v /data ubuntu
#             ↑
#        No name = anonymous volume

Docker generates random name:

VOLUME NAME
a1b2c3d4e5f6...

When Anonymous Volumes are Used

Example: Some images create anonymous volumes by default

# In Dockerfile
VOLUME /data

When a container runs from this image, Docker creates an anonymous volume automatically.


Problem with Anonymous Volumes

docker run image1
# Creates anonymous volume: abc123

docker run image1
# Creates ANOTHER anonymous volume: def456

docker run image1
# Creates ANOTHER anonymous volume: ghi789

Result:
├── 3 containers
├── 3 anonymous volumes
└── Hard to manage! ✗

Better: Use named volumes!

docker run -v my-data:/data image
# Same named volume reused ✓

Part 9: Backing Up and Restoring Volumes

Backup a Volume

Method: Use a temporary container to tar the volume

# Backup volume to tar file
docker run --rm -v VOLUME_NAME:/data -v ${PWD}:/backup ubuntu tar czf /backup/backup.tar.gz /data

Explanation:

--rm                   = Remove container after it finishes
-v VOLUME_NAME:/data   = Mount the volume to back up
-v ${PWD}:/backup      = Mount current directory for the archive
ubuntu                 = Use the Ubuntu image
tar czf /backup/backup.tar.gz /data = Compress /data into the backup file

Example:

# Backup mysql-data volume
docker run --rm \
  -v mysql-data:/data \
  -v C:\Users\You\backups:/backup \
  ubuntu \
  tar czf /backup/mysql-backup.tar.gz /data

Restore a Volume

# Create new volume
docker volume create restored-data

# Restore from backup
docker run --rm \
  -v restored-data:/data \
  -v ${PWD}:/backup \
  ubuntu \
  bash -c "cd /data && tar xzf /backup/backup.tar.gz --strip-components=1"
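
The round trip these two commands perform — pack the volume's contents into an archive, then unpack it into a fresh volume while stripping the leading path component — can be sketched with Python's tarfile module, using throwaway directories in place of real volumes:

```python
import tarfile
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

# Stand-in for the original volume mounted at /data
volume = tmp / "data"
volume.mkdir()
(volume / "important.txt").write_text("This data will persist!")

# Backup: like `tar czf /backup/backup.tar.gz /data`
backup = tmp / "backup.tar.gz"
with tarfile.open(backup, "w:gz") as tar:
    tar.add(volume, arcname="data")

# Restore into a stand-in for the new volume, dropping the
# leading "data/" component (the job tar's strip option does above)
restored = tmp / "restored"
restored.mkdir()
with tarfile.open(backup, "r:gz") as tar:
    for member in tar.getmembers():
        if member.name.startswith("data/"):
            member.name = member.name[len("data/"):]
            tar.extract(member, restored)

restored_text = (restored / "important.txt").read_text()
print(restored_text)  # This data will persist!
```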

Part 10: Cleaning Up Volumes

Remove Single Volume

# Must stop/remove containers using it first
docker volume rm VOLUME_NAME

If volume is in use:

Error: volume is in use

Solution:
1. docker ps -a (find containers using volume)
2. docker rm CONTAINER (remove those containers)
3. docker volume rm VOLUME_NAME (now works)

Remove All Unused Volumes

docker volume prune

Output:

WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y

Deleted Volumes:
volume1
volume2
anonymous-volume-abc123

Total reclaimed space: 2.5GB

Be careful! This removes data permanently!


Practice Exercises

Exercise 1: Persistent Counter

Create a container that counts:

# Create volume
docker volume create counter-data

# Run container
docker run -it -v counter-data:/data ubuntu bash

Inside container:

# Create counter file
echo "0" > /data/count.txt

# Increment counter
COUNT=$(cat /data/count.txt)
COUNT=$((COUNT + 1))
echo $COUNT > /data/count.txt
echo "Count: $COUNT"

exit

Run again (multiple times):

docker run -it -v counter-data:/data ubuntu bash

Inside container:

# Increment and show
COUNT=$(cat /data/count.txt)
COUNT=$((COUNT + 1))
echo $COUNT > /data/count.txt
echo "Count: $COUNT"

Each time, count increases! Data persists! ✓
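
The read/modify/write cycle above is the whole trick: the count lives in a file on the volume, so each new container picks up where the last one stopped. The same cycle in Python, with a temporary file standing in for /data/count.txt:

```python
import tempfile
from pathlib import Path

def increment_counter(count_file: Path) -> int:
    """Read the stored count, add one, write it back --
    the same steps as the shell commands in the exercise."""
    count = int(count_file.read_text()) if count_file.exists() else 0
    count += 1
    count_file.write_text(str(count))
    return count

# Each call stands in for one container run against the same volume
counter = Path(tempfile.mkdtemp()) / "count.txt"
runs = [increment_counter(counter) for _ in range(3)]
print(runs)  # [1, 2, 3] -- the value survives between "runs"
```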


Exercise 2: Development Environment

Setup:

# Create project directory
mkdir my-python-project
cd my-python-project

# Create app.py
echo 'print("Hello!")' > app.py

# Run with bind mount
docker run -it -v ${PWD}:/app python:3.11 bash

Inside container:

cd /app
python app.py

Now edit app.py on your computer, run again in container → See changes!


Summary

What We Learned:

✅ Container data is temporary by default
✅ Volumes provide persistent storage
✅ Three types of mounts:
   ├── Named volumes (managed by Docker)
   ├── Anonymous volumes (random names)
   └── Bind mounts (your directories)
✅ When to use volumes vs bind mounts
✅ Creating and managing volumes
✅ Sharing data between containers
✅ Backing up and restoring
✅ Real-world examples

Key Takeaways:

Volumes:
├── Use for production data
├── Database storage
├── Persistent application data
└── Easy backups

Bind Mounts:
├── Use for development
├── Source code
├── Configuration files
└── Direct file access
