
Docker from Basics to Mastery: Complete Guide for Developers 2026

Master Docker from basic concepts to advanced techniques such as multi-stage builds, Compose orchestration, and production deployment. A practical guide with code examples you can run right away.

15 min read · 2026-03-19

Docker from Basics to Mastery: Complete Guide for Developers 2026

Introduction

Have you ever experienced the classic problem: "It works fine on my laptop, but errors on the server"? Or spent hours setting up a development environment, only for a new colleague to repeat the same process? Docker is here to end all that.

Docker is a containerization platform that allows you to package applications along with all their dependencies into a standard unit called a container. These containers can run anywhere—developer laptops, testing servers, or production clouds—with 100% consistent behavior.

Simple analogy: Think of a container like a frozen food package. Inside, there are ingredients, seasonings, and complete cooking instructions. Whoever cooks it, in any kitchen, the result will be exactly the same.

Why Docker Matters in 2026

  • Over 85% of companies have adopted containerization
  • Kubernetes and container orchestration have become industry standards
  • CI/CD pipelines almost always use Docker
  • Microservices architecture heavily depends on containers
  • Development environments become portable and reproducible

Prerequisites

Before starting, make sure you have:

  • Operating System: Windows 10/11 (with WSL2), macOS, or Linux
  • Minimum 4GB RAM (8GB+ recommended)
  • Basic command line/terminal knowledge
  • Basic understanding of applications and dependencies

Tools to Install

  1. Docker Desktop (Windows/macOS) or Docker Engine (Linux)
  2. Terminal/Command Prompt
  3. Text editor (VS Code, Vim, or anything)

Core Concepts

What is a Container?

A container is a software unit that packages code and all its dependencies so applications can run quickly and reliably from one environment to another.

Container vs Virtual Machine Differences:

VIRTUAL MACHINE

┌───────────┐ ┌───────────┐ ┌───────────┐
│   App A   │ │   App B   │ │   App C   │
├───────────┤ ├───────────┤ ├───────────┤
│ Bins/Libs │ │ Bins/Libs │ │ Bins/Libs │
├───────────┤ ├───────────┤ ├───────────┤
│ Guest OS  │ │ Guest OS  │ │ Guest OS  │  (HEAVY!)
└───────────┘ └───────────┘ └───────────┘
┌─────────────────────────────────────────┐
│                HYPERVISOR               │
├─────────────────────────────────────────┤
│                 HOST OS                 │
├─────────────────────────────────────────┤
│            PHYSICAL HARDWARE            │
└─────────────────────────────────────────┘

DOCKER CONTAINER

┌───────────┐ ┌───────────┐ ┌───────────┐
│   App A   │ │   App B   │ │   App C   │
├───────────┤ ├───────────┤ ├───────────┤
│ Bins/Libs │ │ Bins/Libs │ │ Bins/Libs │  (LIGHTWEIGHT!)
└───────────┘ └───────────┘ └───────────┘
┌─────────────────────────────────────────┐
│              DOCKER ENGINE              │
├─────────────────────────────────────────┤
│                 HOST OS                 │
├─────────────────────────────────────────┤
│            PHYSICAL HARDWARE            │
└─────────────────────────────────────────┘

Key Differences:

Aspect         | Virtual Machine | Container
---------------|-----------------|---------------------
Boot time      | Minutes         | Seconds/milliseconds
Size           | GB              | MB
Performance    | Has overhead    | Near-native
Isolation      | Full isolation  | Process isolation
Resource usage | High            | Efficient

Main Docker Components

1. Dockerfile: Blueprint/recipe for creating images. Contains step-by-step instructions.

2. Image: Read-only template containing OS, libraries, and applications. Created from a Dockerfile.

3. Container: Running instance of an image. Can be created, started, stopped, and deleted.

4. Docker Engine: Runtime that runs containers. Consists of a daemon, CLI, and API.

5. Docker Hub / Registry: Repository for storing and sharing images.
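
How these components relate can be sketched in a minimal example (the file, script, and image names here are illustrative, not from the project built later in this guide):

```dockerfile
# Dockerfile: the recipe
FROM alpine:3.19                 # base image, pulled from a registry (Docker Hub)
COPY hello.sh /hello.sh          # bake application files into the image
CMD ["/bin/sh", "/hello.sh"]     # default command when a container starts

# The Docker Engine turns this recipe into the other components:
#   docker build -t hello:1.0 .        -> Image     (read-only template)
#   docker run --rm hello:1.0          -> Container (running instance)
#   docker push user/hello:1.0         -> Registry  (shared storage; "user" is a placeholder account)
```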

Docker Architecture

┌──────────────┐  ┌──────────────────┐  ┌──────────────┐
│    CLIENT    │  │      CLIENT      │  │    CLIENT    │
│ (docker CLI) │  │ (Docker Desktop) │  │ (API calls)  │
└──────┬───────┘  └────────┬─────────┘  └──────┬───────┘
       │                   │                   │
       └───────────────────┼───────────────────┘
                           ▼
           ┌───────────────────────────────┐
           │         DOCKER DAEMON         │
           └───────────────┬───────────────┘
       ┌───────────────────┼───────────────────┐
       ▼                   ▼                   ▼
┌──────────────┐  ┌──────────────────┐  ┌────────────────┐
│    IMAGES    │  │    CONTAINERS    │  │    NETWORKS    │
│ (templates)  │  │   (instances)    │  │ (connectivity) │
└──────────────┘  └──────────────────┘  └────────────────┘
           ┌────────────────────────────────┐
           │   VOLUMES (persistent data)    │
           └────────────────────────────────┘
           ┌────────────────────────────────┐
           │ REGISTRY (Docker Hub, ECR, etc.)│
           └────────────────────────────────┘

Step-by-Step: Docker Installation

Windows (with WSL2)

  1. Enable WSL2:

wsl --install

  2. Install Docker Desktop from docker.com

  3. Verify installation:

docker --version
docker run hello-world

macOS

# Via Homebrew (recommended)
brew install --cask docker

# Or download from docker.com
# Open Docker.app and follow the setup wizard

Linux (Ubuntu/Debian)

# Update packages
sudo apt-get update

# Install dependencies
sudo apt-get install ca-certificates curl gnupg

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add user to docker group
sudo usermod -aG docker $USER

# Start Docker
sudo systemctl start docker
sudo systemctl enable docker

Step-by-Step: Docker Basics

1. Hello World

# Run your first container
docker run hello-world

# Output:
# Hello from Docker!
# This message shows that your installation appears to be working correctly.

2. Pull and Run Images

# Pull image from registry
docker pull nginx:latest

# Run container
docker run -d -p 8080:80 nginx
# -d: detached mode (background)
# -p 8080:80: map host port 8080 to container port 80

# Open browser to http://localhost:8080

3. Manage Containers

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop container
docker stop <container_id>

# Start container
docker start <container_id>

# Remove container
docker rm <container_id>

# Force remove running container
docker rm -f <container_id>

4. Manage Images

# List images
docker images

# Remove image
docker rmi <image_id>

# Remove unused images
docker image prune

# Build image from Dockerfile
docker build -t my-app:1.0 .

Step-by-Step: Creating a Dockerfile

Example: Node.js Application

Project structure:

my-app/
├── src/
│   └── index.js
├── package.json
└── Dockerfile

package.json:

{
  "name": "my-app",
  "version": "1.0.0",
  "main": "src/index.js",
  "scripts": {
    "start": "node src/index.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}

src/index.js:

const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.json({
    message: 'Hello from Docker!',
    timestamp: new Date().toISOString()
  });
});

app.get('/health', (req, res) => {
  res.json({ status: 'healthy' });
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

Dockerfile:

# Use official Node.js image as base
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy package files first (for better caching)
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production

# Copy application code
COPY src/ ./src/

# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000

# Expose port
EXPOSE 3000

# Create non-root user for security
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Change ownership
RUN chown -R nodejs:nodejs /app

# Switch to non-root user
USER nodejs

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

# Start application
CMD ["node", "src/index.js"]

Build and Run

# Build image
docker build -t my-app:1.0 .

# Run container
docker run -d -p 3000:3000 --name my-app-container my-app:1.0

# Test
curl http://localhost:3000

# Check logs
docker logs my-app-container

# Follow logs (live)
docker logs -f my-app-container

Step-by-Step: Docker Compose

Docker Compose allows you to define and run multi-container applications.

Example: Web App with Database

docker-compose.yml:

version: '3.8'

services:
  # Application service
  app:
    build: .
    container_name: my-app
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@db:5432/mydb
    depends_on:
      db:
        condition: service_healthy
    networks:
      - app-network
    restart: unless-stopped

  # Database service
  db:
    image: postgres:15-alpine
    container_name: my-db
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  # Redis cache (optional)
  redis:
    image: redis:7-alpine
    container_name: my-redis
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - app-network
    restart: unless-stopped

networks:
  app-network:
    driver: bridge

volumes:
  postgres-data:
  redis-data:

Compose Commands

# Start all services
docker compose up -d

# View logs
docker compose logs

# Follow logs for specific service
docker compose logs -f app

# Stop all services
docker compose down

# Stop and remove volumes
docker compose down -v

# Restart specific service
docker compose restart app

# Scale service
docker compose up -d --scale app=3

Deep Dive: Multi-Stage Builds

Multi-stage builds reduce the final image size by separating build environment and runtime environment.

# Stage 1: Build stage
FROM node:20-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install ALL dependencies (including devDependencies)
RUN npm ci

# Copy source code
COPY . .

# Build application (if using TypeScript/webpack/etc)
RUN npm run build

# Stage 2: Production stage
FROM node:20-alpine AS production

WORKDIR /app

# Copy only production dependencies
COPY package*.json ./
RUN npm ci --only=production

# Copy built artifacts from builder stage
COPY --from=builder /app/dist ./dist

# Set environment
ENV NODE_ENV=production

# Expose port
EXPOSE 3000

# Start application
CMD ["node", "dist/index.js"]

Result:

  • Builder stage: ~500MB (with build tools)
  • Production stage: ~100MB (runtime only)

Deep Dive: Docker Networking

Network Types

1. Bridge Network (default)

# Create custom bridge network
docker network create my-network

# Run containers in the network
docker run -d --name app1 --network my-network my-app
docker run -d --name app2 --network my-network my-app

# Containers can communicate by name
# app1 can reach app2 at http://app2:3000

2. Host Network

# Container shares host's network stack
docker run -d --network host nginx

# No port mapping needed, directly on host ports

3. None Network

# Isolated container, no network access
docker run -d --network none my-app

Network Commands

# List networks
docker network ls

# Inspect network
docker network inspect my-network

# Connect container to network
docker network connect my-network container-name

# Disconnect container
docker network disconnect my-network container-name

Deep Dive: Volumes and Data Management

Types of Data Storage

1. Volumes (Recommended)

# Create volume
docker volume create my-volume

# Use volume in container
docker run -v my-volume:/app/data my-app

# List volumes
docker volume ls

# Inspect volume
docker volume inspect my-volume

# Remove volume
docker volume rm my-volume

2. Bind Mounts

# Mount host directory
docker run -v /host/path:/container/path my-app

# Read-only mount
docker run -v /host/path:/container/path:ro my-app

3. tmpfs (Temporary)

# In-memory storage (for sensitive data)
docker run --tmpfs /tmp my-app

Volume in docker-compose.yml

services:
  app:
    volumes:
      # Named volume
      - app-data:/app/data
      # Bind mount
      - ./src:/app/src:ro
      # Anonymous volume
      - /app/node_modules
      # tmpfs
      - type: tmpfs
        target: /tmp

volumes:
  app-data:

Deep Dive: Docker Security Best Practices

1. Run as Non-Root User

# Create user and group
RUN groupadd -r appuser && useradd -r -g appuser appuser

# Set ownership
RUN chown -R appuser:appuser /app

# Switch user
USER appuser

2. Use Minimal Base Images

# Instead of:
# FROM node:20          (1GB+)

# Use:
FROM node:20-alpine     # ~150MB

# Or even:
FROM node:20-slim       # ~200MB

3. Scan for Vulnerabilities

# Using Docker Scout (built-in)
docker scout quickview my-app:1.0

# Detailed CVE report
docker scout cves my-app:1.0

# Compare with base image
docker scout compare my-app:1.0 --to node:20-alpine

4. Use .dockerignore

# .dockerignore
node_modules
npm-debug.log
Dockerfile
docker-compose*.yml
.git
.gitignore
README.md
.env*
*.test.js
coverage
.nyc_output

5. Don't Store Secrets in Images

# BAD - secrets in Dockerfile
ENV DATABASE_PASSWORD=secret123

# GOOD - use secrets at runtime
docker run -e DATABASE_PASSWORD=secret123 my-app

# Or use Docker secrets (Swarm)
docker secret create db_password ./password.txt
docker service create --secret db_password my-app
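
Swarm is not required for file-based secrets: the Compose specification supports them directly. A minimal sketch (the service and file names are illustrative):

```yaml
# docker-compose.yml (fragment)
services:
  app:
    image: my-app:1.0
    secrets:
      - db_password        # mounted at /run/secrets/db_password in the container

secrets:
  db_password:
    file: ./db_password.txt   # keep this file out of version control
```

The application then reads the secret from /run/secrets/db_password instead of an environment variable, so it never appears in image layers or `docker inspect` output.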

Best Practices

1. Layer Optimization

# BAD - Multiple RUN commands
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean

# GOOD - Single RUN command
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

2. Order Instructions by Change Frequency

# Instructions that rarely change go first
FROM node:20-alpine
WORKDIR /app

# Dependencies rarely change
COPY package*.json ./
RUN npm ci --only=production

# Code changes frequently
COPY . .

CMD ["node", "index.js"]

3. Use Specific Tags

# BAD - latest tag
FROM node:latest

# GOOD - specific version
FROM node:20.10.0-alpine

4. Health Checks

# Note: curl must be available inside the image (alpine images may only ship wget)
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

5. Resource Limits

# Limit memory and CPU
docker run -d \
  --memory="512m" \
  --cpus="0.5" \
  my-app

# In docker-compose.yml
services:
  app:
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          memory: 256M

Common Mistakes

❌ Mistake 1: Using :latest Tag

# BAD
FROM node:latest

# GOOD
FROM node:20.10.0-alpine

Reason: latest can change anytime, causing non-reproducible builds.
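
For strict reproducibility you can pin the image digest in addition to the tag; the digest identifies the exact image bytes, so the base can never silently change. The digest below is a placeholder (look up the real one with `docker images --digests`):

```dockerfile
# Tag for readability, digest for immutability (digest shown is a placeholder)
FROM node:20.10.0-alpine@sha256:<digest-from-docker-images--digests>
```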

❌ Mistake 2: Running as Root

# BAD - runs as root by default
FROM node:20-alpine
CMD ["node", "index.js"]

# GOOD - explicit non-root user
FROM node:20-alpine
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -D appuser
USER appuser
CMD ["node", "index.js"]

❌ Mistake 3: Ignoring .dockerignore

Without .dockerignore, Docker will copy ALL files to build context, including node_modules which can be gigabytes.

❌ Mistake 4: Storing Secrets in Dockerfile

# VERY BAD
ENV API_KEY=sk-1234567890
ENV DATABASE_PASSWORD=secret123

# GOOD - use environment variables at runtime
# docker run -e API_KEY=xxx -e DATABASE_PASSWORD=yyy my-app

❌ Mistake 5: Not Using Multi-Stage Builds

# BAD - single stage, includes build tools in final image
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
CMD ["node", "dist/index.js"]

# GOOD - multi-stage, final image only has runtime
FROM node:20 AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]

❌ Mistake 6: Ignoring Container Logs

# Always check logs when debugging
docker logs container-name

# Follow logs in real-time
docker logs -f container-name

# Last 100 lines
docker logs --tail 100 container-name

Advanced Tips

1. BuildKit for Performance

# Enable BuildKit
DOCKER_BUILDKIT=1 docker build -t my-app .

# Or set as default in daemon.json:
{
  "features": { "buildkit": true }
}
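
BuildKit also unlocks cache mounts, which persist the package manager's cache between builds so dependencies are not re-downloaded on every code change. A sketch assuming npm's default cache directory (/root/.npm):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# The cache mount persists across builds but is NOT included in the final image
RUN --mount=type=cache,target=/root/.npm npm ci --only=production
COPY . .
CMD ["node", "index.js"]
```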

2. Cache from Remote Registry

# Pull and use as cache
docker pull my-registry.com/my-app:latest
docker build --cache-from my-registry.com/my-app:latest -t my-app .

3. Distroless Images for Security

# Minimal attack surface
FROM gcr.io/distroless/nodejs20-debian12
COPY app.js /
CMD ["app.js"]

4. Docker Buildx for Multi-Platform

# Build for multiple architectures
docker buildx build --platform linux/amd64,linux/arm64 -t my-app .

5. Debug Running Container

# Get shell inside container
docker exec -it container-name /bin/sh

# Or run a new container with a shell
docker run -it --rm my-app /bin/sh

6. Export/Import Images

# Save image to tar file
docker save -o my-app.tar my-app:1.0

# Load image from tar file
docker load -i my-app.tar

Troubleshooting

Container Won't Start

# Check logs
docker logs container-name

# Check container status
docker inspect container-name

# Run interactively to debug
docker run -it --rm my-app /bin/sh

Image Too Large

# Analyze image layers
docker history my-app:1.0

# Use dive for detailed analysis
dive my-app:1.0

Network Issues

# Check container network
docker network inspect bridge

# DNS issues - use custom DNS
docker run --dns 8.8.8.8 my-app

Permission Denied

# Add user to docker group
sudo usermod -aG docker $USER

# Log out and log in again

Disk Space Full

# Clean up unused resources
docker system prune

# Remove everything (careful!)
docker system prune -a --volumes

# Check disk usage
docker system df

Summary & Next Steps

Key Takeaways

  1. Container vs VM - Containers are lighter, faster, and more efficient
  2. Image = Template - Read-only blueprint for creating containers
  3. Dockerfile = Recipe - Step-by-step instructions for building images
  4. Compose = Orchestrator - Manage multi-container applications
  5. Volumes = Persistence - Data survives container restart/removal
  6. Security matters - Non-root user, minimal images, no secrets

Docker Commands Cheat Sheet

Command                        | Description
-------------------------------|------------------------
docker build -t name:tag .     | Build image
docker run -d -p 8080:80 image | Run container
docker ps                      | List running containers
docker logs container          | View logs
docker exec -it container sh   | Shell access
docker stop container          | Stop container
docker rm container            | Remove container
docker rmi image               | Remove image
docker compose up -d           | Start services
docker compose down            | Stop services
docker volume ls               | List volumes
docker network ls              | List networks

Next Steps

After mastering the Docker basics, here are the next levels to explore:

  1. Kubernetes - Container orchestration at scale
  2. Docker Swarm - Native Docker clustering
  3. CI/CD with Docker - GitHub Actions, GitLab CI, Jenkins
  4. Container Registry - Docker Hub, AWS ECR, Google GCR
  5. Monitoring - Prometheus, Grafana, ELK Stack


Conclusion

Docker has changed the way we develop, deploy, and scale applications. By understanding the core concepts—images, containers, Dockerfile, and Compose—you have a solid foundation for modern software development.

Remember:

  • Start small, iterate often
  • Always test in containers similar to production
  • Security is everyone's responsibility
  • Documentation is your friend

Happy containerizing! 🐳