Mastering Production-Grade Docker Container Deployment on Linux
It’s trivial to launch a container with docker run. Keeping that container alive, performant, and observable in production is another story. The gulf between a hobbyist setup and a robust deployment lies in the details: restart policies, persistent storage, controlled resource usage, and system integration. Below is a field-tested deployment pipeline for Dockerized applications on Linux servers.
Why Standard docker run Is Not Production
A typical local run:
docker run -d myapp:latest
This will get your code up, but it leaves the following hazards unaddressed:
- No automated recovery (process crash, host reboot)
- Unchecked resource consumption (OOM kills, CPU stall)
- No persistent state (logs/db/data lost after recreate)
- Lacking integration with monitoring/alerting stack
- Opaque networking (service isolation, egress policies)
A responsible deployment addresses these systematically.
Server Prep: Harden and Stage the Linux Host
Update, patch, and baseline.
Outdated hosts expand the attack surface and invite operational drift.
sudo apt-get update && sudo apt-get upgrade -y
Clean and install Docker (Ubuntu 22.04+ recommended):
sudo apt-get remove docker docker-engine docker.io containerd runc 2>/dev/null
sudo apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce=5:24.0.7-1~ubuntu.22.04~jammy docker-ce-cli=5:24.0.7-1~ubuntu.22.04~jammy containerd.io
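To keep those pinned versions from drifting during routine upgrades, the packages can optionally be held:
sudo apt-mark hold docker-ce docker-ce-cli containerd.io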
Group permissions:
Add your user to the docker group if you require passwordless operation, but be aware of the security trade-off.
sudo usermod -aG docker $USER
newgrp docker
Note: On shared hosts, avoid blanket Docker group membership; it essentially gives root privileges.
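A quick sanity check before going further: confirm the daemon is enabled at boot and can actually run a container.
sudo systemctl enable --now docker
docker version
docker run --rm hello-world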
Image Hygiene: Build for Production, Not for Dev
Production images must balance security, size, and reproducibility.
Minimal, version-pinned base images.
Multi-stage build pattern (Node.js example):
FROM node:18.19.1-alpine AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
FROM node:18.19.1-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app .
USER node
CMD ["node", "server.js"]
Build with explicit tags (no “:latest” in prod):
docker build -t myregistry.local/myapp:1.0.2 .
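A .dockerignore alongside the Dockerfile keeps the build context small and stops local state or credentials from leaking into image layers; a minimal example for the Node.js project above:
# .dockerignore
.git
node_modules
npm-debug.log
.env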
Gotcha: Always scan third-party images for known CVEs, for example with Trivy or Docker Scout (the successor to the now-deprecated docker scan).
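For example, a Trivy scan that fails the pipeline on serious findings against the image built above:
trivy image --severity HIGH,CRITICAL --exit-code 1 myregistry.local/myapp:1.0.2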
Volumes: Durable State Beyond Ephemeral Containers
Stateless is a myth for anything with logs, cache, or user uploads.
docker volume create appdata-prod
docker run -d \
--name myapp \
-v appdata-prod:/var/lib/myapp/data \
myregistry.local/myapp:1.0.2
Note: Avoid direct host path binds unless you need to interoperate tightly with host-based backup or file monitoring.
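Named volumes still need a backup story. A common pattern is a short-lived helper container that archives the volume to a host directory; a sketch, where /srv/backups is an assumed host path:
# mount the volume read-only plus a host directory for the archive
docker run --rm \
  -v appdata-prod:/data:ro \
  -v /srv/backups:/backup \
  alpine:3.19 \
  tar czf /backup/appdata-prod-$(date +%F).tar.gz -C /data .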
Container Lifecycle: Reliability via Restart Policy
Use restart policies to ensure containers recover from failures or reboots.
| Policy | Use case |
|---|---|
| no | Dev only |
| on-failure[:N] | Recovery from transient app bugs |
| always | Continuous background tasks/services |
| unless-stopped | Most production web apps |
Example:
docker run -d \
--name myapp \
--restart unless-stopped \
myregistry.local/myapp:1.0.2
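The policy on an existing container can be changed or verified in place, without recreating it:
docker update --restart unless-stopped myapp
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' myapp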
Enforce Resource Discipline
Linux servers often run multiple workloads; unbounded containers create noisy-neighbor effects and denial-of-service scenarios.
docker run -d \
--name myapp \
--memory="512m" \
--cpus="1.0" \
myregistry.local/myapp:1.0.2
Keep monitoring actual usage; there is no substitute for live metrics and kernel logs when diagnosing cgroup kills:
kernel: Memory cgroup out of memory: Kill process 2034 (node) score 199 or sacrifice child
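To correlate such kills with container behavior, check live usage, the container’s OOM flag, and the kernel ring buffer:
docker stats --no-stream myapp
docker inspect -f '{{ .State.OOMKilled }}' myapp
sudo journalctl -k --since "1 hour ago" | grep -i "out of memory"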
Networking: Isolation and Connectivity
Beyond “ports open to the world”, design deterministic traffic flows.
User-defined bridge networks:
docker network create --driver bridge appnet
docker run -d --name db --network appnet postgres:15.6-alpine
docker run -d --name api --network appnet myregistry.local/api:1.2.0
Service discovery works via container names: the api container reaches the database at db:5432.
Expose only required ports. For north-south traffic, standardize on reverse proxies (Traefik/nginx). Layer security with firewalls (ufw/iptables) and, when relevant, mutual TLS between services.
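As a sketch of that pattern, publish only the proxy and let it reach the api container over the bridge network; the upstream port 3000, the container name edge, and the config path are assumptions:
sudo mkdir -p /etc/myapp
# minimal nginx config proxying all traffic to the api container
sudo tee /etc/myapp/nginx.conf > /dev/null <<'EOF'
events {}
http {
  server {
    listen 80;
    location / {
      proxy_pass http://api:3000;
      proxy_set_header Host $host;
    }
  }
}
EOF
docker run -d --name edge --network appnet -p 80:80 \
  -v /etc/myapp/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:1.25-alpine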
Configuration: Environment Variables and Secrets
Environment variables:
Pass non-sensitive config (e.g., environment, endpoints) at runtime.
docker run -d \
-e NODE_ENV=production \
-e DB_HOST=db \
myregistry.local/myapp:1.0.2
Secrets:
Avoid environment variables for passwords/API keys in production. Prefer Docker Swarm secrets or external vaults (HashiCorp Vault, AWS Secrets Manager):
- No native secrets in plain Docker Engine; use file mounts or external agents.
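A minimal file-mount sketch; the host path and the DB_PASSWORD_FILE convention are assumptions about how the application consumes the secret:
# assumes /etc/myapp/secrets/db_password exists on the host with mode 600
docker run -d \
  --name myapp \
  -v /etc/myapp/secrets/db_password:/run/secrets/db_password:ro \
  -e DB_PASSWORD_FILE=/run/secrets/db_password \
  myregistry.local/myapp:1.0.2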
Multi-Container Patterns: When docker-compose Works
Single-host, multi-service applications benefit from Compose. Not orchestration, but sufficient up to a point.
Example minimal docker-compose.yml:
version: '3.8'
services:
  db:
    image: postgres:15.6-alpine
    volumes:
      - db-data:/var/lib/postgresql/data/
    environment:
      POSTGRES_PASSWORD: strongpassword
  web:
    image: myregistry.local/myapp:1.0.2
    ports:
      - "80:3000"
    depends_on:
      - db
    environment:
      DB_HOST: db
    restart: unless-stopped
volumes:
  db-data:
Deploy:
docker compose -f /etc/myapp/docker-compose.yml up -d
Known issue: how per-service resource limits are expressed depends on the Compose file format. In the v3 format they sit under deploy.resources; classic docker-compose only applies them in Swarm mode (or with the --compatibility flag), while the Docker Compose v2 plugin honors the limits directly.
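A sketch of such limits for the web service above, reusing the values from the earlier docker run flags:
  web:
    image: myregistry.local/myapp:1.0.2
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M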
Reliability: Systemd Integration
Restart policies handle in-place recovery, but systemd integration adds explicit start ordering against docker.service, clean dependency handling, and hooks such as pulling a fresh image at boot. Offload lifecycle control to systemd for predictable orchestration.
Sample systemd unit /etc/systemd/system/myapp.service:
[Unit]
Description=MyApp Production Container
After=docker.service
Requires=docker.service
[Service]
Restart=always
ExecStartPre=/usr/bin/docker pull myregistry.local/myapp:1.0.2
# remove any stale container from a previous run; the "-" prefix ignores failure
ExecStartPre=-/usr/bin/docker rm -f myapp
# restarts are delegated to systemd's Restart=always; a --restart flag
# would conflict with --rm and is deliberately omitted
ExecStart=/usr/bin/docker run --rm --name myapp \
  -v appdata-prod:/var/lib/myapp/data \
  myregistry.local/myapp:1.0.2
ExecStop=/usr/bin/docker stop myapp
[Install]
WantedBy=multi-user.target
Reload and enable:
sudo systemctl daemon-reload
sudo systemctl enable myapp
sudo systemctl start myapp
Tip: For Compose stacks, use a docker-compose@.service template unit or wrapper scripts.
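A minimal template sketch, assuming each stack lives at /etc/&lt;instance&gt;/docker-compose.yml as in the Compose example above; save it as /etc/systemd/system/docker-compose@.service and enable a stack with systemctl enable --now docker-compose@myapp:
[Unit]
Description=Compose stack for %i
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/etc/%i
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target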
Summary Table: Key Practices For Production Deployment
| Area | Recommendations |
|---|---|
| Image | Minimal, pinned versions, scan for CVEs |
| Storage | Named Docker volumes |
| Restart | Use robust policies (unless-stopped) |
| Resources | Explicit --memory and --cpus flags |
| Networking | User-defined bridges + reverse proxy |
| Config | Env vars and secret managers |
| Monitoring | Forward logs, add healthchecks |
| Orchestration | Compose/Swarm/Kubernetes as required |
Noteworthy Details
- Docker logging: Forward logs to journald or external aggregators (Filebeat/Logstash) via --log-driver flags.
- Healthchecks: Add HEALTHCHECK directives in Dockerfiles so the runtime and orchestrators can act on container health.
- Image scanning: Integrate Trivy or similar scanners in CI/CD.
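For example, a HEALTHCHECK directive in the Dockerfile plus the journald log driver at run time; the /health endpoint and port 3000 are assumptions about the application:
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

# journald log driver at run time (other flags from earlier examples omitted for brevity)
docker run -d --name myapp --log-driver journald myregistry.local/myapp:1.0.2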
There’s no universal recipe. Risk profiles, compliance requirements, and team maturity all affect the chosen stack. Unless you’re constrained, invest early in observability and secrets management—even at single-host scale.
For distributed orchestration or hybrid cloud, Kubernetes enters the discussion. But for many, disciplined Docker use on Linux is sufficient—and far less operational drag.