“It works on my machine” — the phrase every developer dreads. Docker was designed to eliminate this problem. This guide teaches you Docker from scratch with practical examples.
What Is Docker?
Docker is a platform that lets you package applications and their dependencies into standardized units called containers. A container includes everything your app needs to run: code, runtime, system libraries, and settings.
Think of it like shipping containers in the real world. Before standardized shipping containers, loading cargo was chaotic — different sizes, shapes, and handling requirements. Shipping containers solved this by providing a universal, portable format. Docker does the same for software.
Why Docker Matters
| Problem | Docker’s Solution |
|---|---|
| “Works on my machine” syndrome | Identical environment everywhere |
| Complex dependency management | All dependencies packaged together |
| Slow onboarding for new developers | `docker compose up` and you’re running |
| Inconsistent dev/staging/production | Same container image across all environments |
| Resource-heavy virtual machines | Lightweight containers sharing the host OS kernel |
Containers vs Virtual Machines
This is the most important distinction to understand:
Virtual Machines (VMs)
- Run a complete operating system on top of a hypervisor
- Each VM includes its own kernel, drivers, and libraries
- Typically use gigabytes of disk space and RAM
- Boot time: minutes
Containers
- Share the host operating system’s kernel
- Include only the application and its dependencies
- Typically use megabytes of disk space
- Start time: seconds (often milliseconds)
Virtual Machine Architecture:

┌──────────┐ ┌──────────┐ ┌──────────┐
│  App A   │ │  App B   │ │  App C   │
│  Libs    │ │  Libs    │ │  Libs    │
│ Guest OS │ │ Guest OS │ │ Guest OS │ ← Each VM has full OS
└────┬─────┘ └────┬─────┘ └────┬─────┘
     └────────────┼────────────┘
            ┌─────┴──────┐
            │ Hypervisor │
            │  Host OS   │
            │  Hardware  │
            └────────────┘

Container Architecture:

┌──────────┐ ┌──────────┐ ┌──────────┐
│  App A   │ │  App B   │ │  App C   │
│  Libs    │ │  Libs    │ │  Libs    │ ← Containers share OS
└────┬─────┘ └────┬─────┘ └────┬─────┘
     └────────────┼────────────┘
          ┌───────┴───────┐
          │ Docker Engine │
          │    Host OS    │
          │   Hardware    │
          └───────────────┘
Installing Docker
macOS
Download and install Docker Desktop from docker.com. It includes Docker Engine, Docker CLI, and Docker Compose.
Windows
Install Docker Desktop with WSL 2 backend (recommended). This gives you native Linux container support on Windows.
Linux
# Ubuntu/Debian: first add Docker's official apt repository
# (docker-ce is not in the default repos; see docs.docker.com/engine/install)
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Verify installation
docker --version
docker compose version
Verify Your Installation
# Run the hello-world container
docker run hello-world
If you see “Hello from Docker!”, your installation is working correctly.
Core Docker Concepts
Image
A Docker image is a read-only template for creating containers. Think of it as a blueprint or a class in object-oriented programming. Images are built in layers, making them efficient to store and transfer.
Container
A container is a running instance of an image. You can create multiple containers from the same image. Containers are isolated from each other and the host system.
Dockerfile
A Dockerfile is a text file with instructions for building a Docker image. Each instruction creates a layer in the image.
Registry
A registry is a repository for Docker images. Docker Hub is the default public registry, similar to npm for JavaScript or PyPI for Python.
Volume
A volume is a mechanism for persisting data generated by containers. Without volumes, data is lost when a container is removed.
Dockerfile → (build) → Image → (run) → Container
                         ↓
              Docker Hub (push/pull)
Essential Docker Commands
Working with Images
# Download an image from Docker Hub
docker pull nginx
# List all local images
docker images
# Remove an image
docker rmi nginx
# Build an image from a Dockerfile
docker build -t my-app:1.0 .
Working with Containers
# Run a container (downloads image if not found locally)
docker run nginx
# Run in detached mode (background)
docker run -d --name my-nginx nginx
# Run with port mapping (host:container)
docker run -d -p 8080:80 nginx
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop a container
docker stop my-nginx
# Remove a container
docker rm my-nginx
# View container logs
docker logs my-nginx
# Execute a command inside a running container
docker exec -it my-nginx /bin/bash
Key Flags Explained
| Flag | Meaning | Example |
|---|---|---|
| `-d` | Detached mode (background) | `docker run -d nginx` |
| `-p` | Port mapping | `-p 3000:80` (host 3000 → container 80) |
| `-v` | Volume mount | `-v ./data:/app/data` |
| `--name` | Name the container | `--name my-app` |
| `-e` | Environment variable | `-e NODE_ENV=production` |
| `-it` | Interactive terminal | `docker exec -it container bash` |
Writing Your First Dockerfile
Node.js Application Example
# Use an official Node.js runtime as base image
FROM node:20-alpine
# Set working directory inside the container
WORKDIR /app
# Copy package files first (for better layer caching)
COPY package*.json ./
# Install production dependencies only (--only=production is deprecated in favor of --omit=dev)
RUN npm ci --omit=dev
# Copy application source code
COPY . .
# Expose the port the app runs on
EXPOSE 3000
# Define the command to run the app
CMD ["node", "server.js"]
Build and Run
# Build the image
docker build -t my-node-app .
# Run the container
docker run -d -p 3000:3000 my-node-app
# Test it
curl http://localhost:3000
Python Application Example
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Understanding Layer Caching
Docker caches each layer. If a layer hasn’t changed, Docker reuses the cache. This is why we copy package.json before the source code — dependencies change less frequently than code, so the dependency-install step only reruns when the package files actually change.
# ✅ Good: Dependencies cached separately
COPY package*.json ./
RUN npm ci
COPY . .
# ❌ Bad: Dependencies reinstalled on every code change
COPY . .
RUN npm ci
Docker Compose: Multi-Container Apps
Most real applications need multiple services (web server, database, cache). Docker Compose lets you define and run multi-container applications with a single YAML file.
Example: Web App + PostgreSQL + Redis
# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:
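One caveat worth knowing: `depends_on` in the form above only waits for the db container to *start*, not for PostgreSQL to accept connections. A sketch of the healthcheck-based variant (using `pg_isready`, which ships in the postgres image):

```yaml
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy  # wait until the healthcheck passes
```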
Docker Compose Commands
# Start all services
docker compose up
# Start in background
docker compose up -d
# Stop all services
docker compose down
# View logs
docker compose logs -f
# Rebuild after code changes
docker compose up --build
# Remove everything including volumes
docker compose down -v
Real-World Use Cases
1. Local Development Environment
Set up a complete development environment that any team member can run instantly:
git clone https://github.com/team/project.git
cd project
docker compose up
# Everything is running — app, database, cache, queue
2. Consistent CI/CD Pipelines
# GitHub Actions example
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker compose -f docker-compose.test.yml up --abort-on-container-exit
3. Microservices Architecture
Each service runs in its own container with its own dependencies, isolated from other services.
4. Database Management
Spin up databases instantly for testing without installing them on your machine:
# Need a PostgreSQL database? One command:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres:16
# Need MongoDB? Same thing:
docker run -d -p 27017:27017 mongo:7
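You don’t need a local client installed either: you can exec psql inside the container itself. A sketch (the container name `pg-test` is arbitrary; the sleep rides out the postgres image’s one-time init restart):

```shell
docker run -d --name pg-test -e POSTGRES_PASSWORD=secret postgres:16

# Wait until the server accepts connections (pg_isready ships in the image)
until docker exec pg-test pg_isready -U postgres >/dev/null 2>&1; do sleep 1; done
sleep 2

# -t: tuples only, -A: unaligned, -c: run a single command
result=$(docker exec pg-test psql -U postgres -tAc 'SELECT 1 + 1;')
echo "$result"

docker rm -f pg-test
```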
Best Practices
Dockerfile Best Practices
- **Use specific base image tags** — `node:20-alpine`, not `node:latest`
- **Minimize layers** — combine related RUN commands with `&&`
- **Use `.dockerignore`** — exclude `node_modules`, `.git`, logs
- **Run as non-root user** — improves security
- **Use multi-stage builds** — reduces final image size
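The list above mentions running as a non-root user without showing it. A sketch for the Node image (which already ships a `node` user; `server.js` is a placeholder for your entrypoint):

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
# Give the files to the unprivileged user the image provides
COPY --chown=node:node . .
# Switch away from root before the app starts
USER node
EXPOSE 3000
CMD ["node", "server.js"]
```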
Multi-Stage Build Example
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine
WORKDIR /app
# Install production deps only: the builder's node_modules also contains dev dependencies
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]
.dockerignore Example
node_modules
.git
.env
*.log
dist
coverage
.DS_Store
Frequently Asked Questions (FAQ)
Q1. Is Docker free?
Docker Engine and Docker CLI are free and open source. Docker Desktop is free for personal use and small businesses (under 250 employees and under $10M revenue). Larger organizations need a paid subscription.
Q2. Does Docker slow down my application?
Container overhead is negligible — typically less than 1% performance impact. Containers share the host kernel and don’t need to emulate hardware like VMs do.
Q3. Can I use Docker on Windows?
Yes. Docker Desktop for Windows uses WSL 2 (Windows Subsystem for Linux) to run Linux containers natively, with performance close to running on Linux directly.
Q4. How is Docker different from Kubernetes?
Docker creates and runs containers. Kubernetes orchestrates containers at scale — managing deployment, scaling, load balancing, and self-healing across multiple machines. Learn Docker first, then pick up Kubernetes when you need to manage many containers in production.
Q5. Should I use Docker in production?
Yes, Docker is widely used in production by companies of all sizes, from startups to Fortune 500 companies. Major cloud providers (AWS, Azure, GCP) all offer container hosting services. Combine Docker with an orchestrator like Kubernetes or a managed service like AWS ECS for production-grade deployments.
Conclusion
Docker has become an essential tool in modern software development. It solves the age-old “works on my machine” problem and makes it trivially easy to set up complex development environments.
Start with the basics: build a Dockerfile for one of your projects, run it as a container, then gradually add Docker Compose for multi-service setups. Once you’re comfortable, you’ll wonder how you ever developed without it.
Recommended Reading: