DevOps
Docker & Container Basics: Deployment Made Simple
📅 February 5, 2026
⏱️ 11 min read
✍️ By Anjani Raj
Introduction: From Local to Cloud with Confidence
One of the most common challenges in software deployment is the mismatch between local development environments and production servers. "It works on my machine" is exactly the problem Docker solves. Docker packages your application with all its dependencies into a container: a lightweight, portable unit that runs identically on your laptop and on a cloud server.
This article explains Docker fundamentals and shows how to containerize applications for reliable, scalable deployments.
Understanding Containers: What and Why
Containers vs Virtual Machines
Containers are often confused with virtual machines. Both isolate workloads, but they do it in fundamentally different ways:
- Virtual Machines: Full operating system instances. Lots of overhead. Take minutes to start. Typical size: gigabytes.
- Containers: Lightweight process isolation. Share the host OS kernel. Start in milliseconds. Typical size: megabytes.
Containers offer the isolation benefits of VMs with the efficiency of processes. This makes them ideal for modern application deployment.
Images vs Containers
Understanding this distinction is crucial:
- Image: Blueprint containing application code, dependencies, and configuration. Immutable. Think of it as a class.
- Container: Running instance of an image. Mutable while running. Think of it as an object.
You build an image once, then run multiple containers from that image.
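The distinction shows up directly at the command line. Assuming an image tagged `my-app:1.0.0` (built as shown later in this article), one image can back any number of running containers:

```shell
# One image...
docker build -t my-app:1.0.0 .

# ...many containers: each run creates an independent instance,
# here mapped to different host ports
docker run -d --name app-1 -p 3001:3000 my-app:1.0.0
docker run -d --name app-2 -p 3002:3000 my-app:1.0.0

# List all containers created from that image
docker ps --filter ancestor=my-app:1.0.0
```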
Dockerfile: Defining Your Container
Creating a Simple Dockerfile
A Dockerfile is a text file with instructions to build an image:
# Use official Node.js runtime as base image
FROM node:18-alpine
# Set working directory in container
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy application code
COPY . .
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
CMD node healthcheck.js
# Run application
CMD ["node", "server.js"]
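One caveat with `COPY . .`: it copies everything in the build context into the image, including `node_modules` and local artifacts. A `.dockerignore` file keeps the context small and the image clean (the entries below are a typical sketch; adjust to your project):

```text
# .dockerignore — exclude local artifacts from the build context
node_modules
npm-debug.log
.git
.env
Dockerfile
docker-compose.yml
```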
Understanding Dockerfile Layers
Each instruction creates a layer in the image. Layers are cached and reused, speeding up builds:
# Better: Dependencies change rarely
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
CMD ["node", "server.js"]
# Worse: Rebuilds dependencies even when only code changes
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm ci --only=production
CMD ["node", "server.js"]
Order instructions from least frequently changed to most frequently changed for optimal caching.
Building and Running Containers
Building an Image
Build an image from a Dockerfile:
# Build image
docker build -t my-app:1.0.0 .
# Tag with registry/repository format for Docker Hub
docker build -t username/my-app:1.0.0 .
# Build and tag with latest
docker build -t username/my-app:1.0.0 -t username/my-app:latest .
Running a Container
Run containers from images:
# Run container
docker run -p 3000:3000 my-app:1.0.0
# Run with environment variables
docker run -p 3000:3000 \
-e NODE_ENV=production \
-e DATABASE_URL=mongodb://... \
my-app:1.0.0
# Run in background (detached mode)
docker run -d -p 3000:3000 --name my-app-instance my-app:1.0.0
# View running containers
docker ps
# View logs
docker logs my-app-instance
# Stop container
docker stop my-app-instance
# Remove container
docker rm my-app-instance
Port Mapping: The syntax `-p 3000:3000` means "map port 3000 on the host to port 3000 in the container". Left side is host port, right side is container port.
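The two sides need not match. For example, to expose the container's port 3000 on host port 8080 (assuming the `my-app:1.0.0` image from above):

```shell
# Host port 8080 → container port 3000
docker run -d -p 8080:3000 --name my-app-alt my-app:1.0.0

# The app is now reachable on the host at port 8080
curl http://localhost:8080
```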
Multi-Stage Builds: Optimized Production Images
Problem: Large Production Images
Building directly from source creates large images containing build tools and source code. Multi-stage builds fix this:
# Stage 1: Build
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Runtime (small and efficient)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Copy only built files from builder stage
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/server.js"]
This produces a much smaller production image: the builder stage's build tools and source code aren't included in the final image.
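You can also build an individual stage with `--target`, which is handy for debugging the build stage, and then compare the resulting sizes with `docker images` (stage and tag names here match the Dockerfile above):

```shell
# Build only the first stage (named "builder" in the Dockerfile)
docker build --target builder -t my-app:build .

# Build the full multi-stage image
docker build -t my-app:prod .

# Compare image sizes side by side
docker images my-app
```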
Docker Compose: Multi-Container Applications
Problem: Running Multiple Services
Real applications need multiple services: web server, database, cache, message queue. Running each container manually is tedious. Docker Compose solves this:
# docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=mongodb://db:27017/myapp
    depends_on:
      - db
      - redis
    volumes:
      - ./logs:/app/logs
  db:
    image: mongo:5
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
volumes:
  mongo_data:
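Hardcoding credentials like MONGO_INITDB_ROOT_PASSWORD is risky if the compose file is committed to version control. Compose substitutes variables from a `.env` file in the project directory; a sketch (the variable names are illustrative):

```text
# .env (keep this file out of version control)
MONGO_ROOT_USERNAME=admin
MONGO_ROOT_PASSWORD=change-me

# Then reference the variables in docker-compose.yml:
#   environment:
#     - MONGO_INITDB_ROOT_USERNAME=${MONGO_ROOT_USERNAME}
#     - MONGO_INITDB_ROOT_PASSWORD=${MONGO_ROOT_PASSWORD}
```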
Running with Docker Compose
Docker Compose commands simplify multi-container orchestration:
# Start all services
docker-compose up
# Start in background
docker-compose up -d
# View logs
docker-compose logs -f web
# Stop services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# Rebuild images
docker-compose build
# Run one-off command in service
docker-compose exec web npm test
Container Networking and Data Persistence
Networking Between Containers
Compose automatically places all services defined in one file on a shared network, where each service is reachable by its name:
// Services can reference each other by service name.
// In the web service, connect to the database and cache like this:
const mongoUri = 'mongodb://db:27017/myapp';
const redisClient = new Redis('redis://redis:6379');
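Outside of Compose, containers don't share a network by default; you create one explicitly and attach containers to it. On a user-defined network, Docker's built-in DNS resolves container names (image names below match earlier examples):

```shell
# Create a user-defined bridge network
docker network create app-net

# Attach containers to it; they can now reach each other by name
docker run -d --network app-net --name db mongo:5
docker run -d --network app-net -p 3000:3000 my-app:1.0.0
```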
Data Persistence with Volumes
By default, data written inside a container's filesystem disappears when the container is removed. Volumes persist data beyond the container's lifecycle:
# Named volume (managed by Docker)
docker run -v my-data:/app/data my-app
# Bind mount (map host directory)
docker run -v /host/path:/container/path my-app
# In Docker Compose
volumes:
  - mongo_data:/data/db    # Named volume
  - ./config:/app/config   # Bind mount
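Named volumes can be listed, inspected, and cleaned up with the `docker volume` subcommands (`my-data` matches the named volume from the example above):

```shell
# List all volumes
docker volume ls

# Show where a volume lives on the host, plus other metadata
docker volume inspect my-data

# Remove a volume (and its data) once no container uses it
docker volume rm my-data
```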
Best Practices for Production Containers
Security: Running as Non-Root User
Containers running as root pose a security risk: a container escape would hand an attacker root privileges on the host. Run your application as an unprivileged user instead:
FROM node:18-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# Switch to non-root user
USER nodejs
EXPOSE 3000
CMD ["node", "server.js"]
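You can verify the effective user from inside a running container (container name as used in earlier examples):

```shell
# Should print "nodejs", not "root", given the Dockerfile above
docker exec my-app-instance whoami
```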
Resource Limits
Prevent containers from consuming unlimited resources:
# docker-compose.yml
services:
  web:
    build: .
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
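The same caps can be applied to a single container with `docker run` flags, without Compose:

```shell
# Cap the container at half a CPU core and 512 MB of RAM
docker run -d -p 3000:3000 --cpus="0.5" --memory="512m" my-app:1.0.0
```

`docker stats` (covered below) shows whether a container is approaching its limits.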
Health Checks
Tell Docker how to determine if a container is healthy:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
CMD npm run healthcheck
CMD ["node", "server.js"]
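Docker records each check's result in the container's state, which you can query with `docker inspect`:

```shell
# Show just the health status: starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' container-name

# Show recent check results, including the command's output
docker inspect --format '{{json .State.Health}}' container-name
```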
Deployment Workflow
Build, Push, Pull, Run
Typical deployment workflow:
# 1. Build image locally
docker build -t myregistry/myapp:1.0.0 .
# 2. Push to registry (Docker Hub, AWS ECR, etc.)
docker login
docker push myregistry/myapp:1.0.0
# 3. On server, pull image
docker pull myregistry/myapp:1.0.0
# 4. Run container
docker run -d -p 3000:3000 myregistry/myapp:1.0.0
Continuous Deployment with Docker
Automate the build and deployment pipeline:
# GitHub Actions example
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Push to registry
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker tag myapp:${{ github.sha }} myapp:latest
          docker push myapp:latest
      - name: Deploy to server
        run: |
          ssh user@server "docker pull myapp:latest && docker-compose up -d"
Monitoring and Debugging
Viewing Container Metrics
Monitor container resource usage:
# View container stats
docker stats
# Show container information
docker inspect container-name
# View container processes
docker top container-name
Debugging Inside Containers
Execute commands inside running containers:
# Open shell in running container
docker exec -it container-name /bin/sh
# Run command
docker exec container-name npm test
# View file contents
docker exec container-name cat /app/config.json
Conclusion: Containers as the Standard
Docker and containerization have become industry standard. Every major platform supports containers. Learning Docker fundamentals is essential for modern development. Start by containerizing a simple application, then expand to multi-container setups with Docker Compose. Once comfortable, explore orchestration platforms like Kubernetes for large-scale deployments.
Containers solve the "it works on my machine" problem, enable consistent deployments, and simplify DevOps workflows. That's why they're everywhere.
Ready to Master Containerization?
Connect with me on LinkedIn to discuss Docker, DevOps, and deployment strategies.