Best Way To Deploy Docker Containers

#Docker #DevOps #Containers #Immutable #Kubernetes #CI/CD

Mastering Docker Container Deployment: Build Reliable Pipelines with Immutable Images and Automated Rollback

Anyone supporting production workloads has probably encountered the “works on my machine” syndrome and the downtime that follows a problematic release. Root cause: stateful, mutable deployments that drift over time and roll back poorly under pressure. Standing up robust Docker deployments means aligning every release behind repeatable, immutable infrastructure and building in safety nets for rapid failure recovery.


Immutable Infrastructure: No Room for Drift

Patch servers, tweak containers in-place, or deploy with the latest tag—expect entropy, inconsistency, and surprises on Friday afternoons. Immutable infrastructure removes these variables. Every node or container is a disposable asset, rebuilt cleanly from a versioned artifact for each deployment.

Key benefits:

  • Bit-for-bit repeatability: If it works in CI, it’s what you get in prod.
  • Rollback by replacement: redeploy the previous image tag; nothing has to be patched or “undone” in place in production.
  • Horizontal scaling: No warm-up drift—autoscale with predictable state.

Example:

A deployment pipeline pushes myapp:1.17.4-6f8e6a2, never mutating the tag or rebuilding after QA.
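
If a release later needs to be promoted (say, marked as the production build), point an additional tag at the already-tested image rather than rebuilding. A minimal sketch, with illustrative registry and tag names:

# Promote the exact image that passed QA: add a tag, never rebuild
docker pull registry.example.com/myapp:1.17.4-6f8e6a2
docker tag registry.example.com/myapp:1.17.4-6f8e6a2 registry.example.com/myapp:prod-1.17.4
docker push registry.example.com/myapp:prod-1.17.4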


Immutable Build Artifact: The Foundation

CI/CD pipelines handle everything from build to push; you never want to build images on production nodes. Here is the core build-test-push flow as shell commands (the full GitHub Actions workflow appears later), assuming Docker 24.x+:

# Build an image with a unique tag
docker build --platform linux/amd64 \
  -t registry.example.com/myapp:1.4.2-$(git rev-parse --short HEAD) .

# Local test run to catch integration breakage
# (abe12cd is the short commit hash produced by the build above)
docker run --rm registry.example.com/myapp:1.4.2-abe12cd pytest tests/

# Push to registry (assume prior login)
docker push registry.example.com/myapp:1.4.2-abe12cd

Tip: prefer git commit hashes as tags over “latest”. Anyone rolling back must know exactly which image is deployed. If you use semantic versions, append the hash (1.4.2-abe12cd).
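
To confirm what is actually running on a plain Docker host, inspect the container rather than trusting deployment notes (the container name here is illustrative):

# Print the image reference the running container was started from
docker inspect --format '{{.Config.Image}}' myapp_web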


Deployment: Replace, Don’t Update

In-place mutation—e.g., docker exec to patch running containers, or retagging “latest”—is error-prone and non-auditable. For simple stacks, even docker-compose can uphold immutability:

version: '3.8'
services:
  web:
    image: registry.example.com/myapp:1.4.2-abe12cd
    ports: ["80:8080"]

Upgrade? Edit the tag, redeploy:

docker-compose pull web
docker-compose up -d web
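
If editing the file by hand feels error-prone, Compose variable substitution lets the pipeline inject the tag instead. A minimal sketch (the APP_TAG variable name is an assumption):

# Deploy a specific version: APP_TAG=1.4.2-abe12cd docker-compose up -d web
services:
  web:
    image: registry.example.com/myapp:${APP_TAG:?APP_TAG must be set to a versioned tag}
    ports: ["80:8080"]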

For modern clusters, orchestrators like Kubernetes are built around stateless, disposable pods. Your Deployment (or Helm chart) references explicit, versioned images:

spec:
  containers:
    - name: app
      image: registry.example.com/myapp:1.4.2-abe12cd

Zero-downtime rollout? Use strategies like blue/green or canary. For critical systems, never flip 100% of traffic immediately.
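
Even a plain rolling update can be made conservative: tell the Deployment to keep full capacity while new pods come up. A sketch, with replica count and surge values as assumptions:

spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # start one extra pod with the new image
      maxUnavailable: 0    # never drop below the desired replica count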


Automated Rollback: Engineer Away Human Error

Rolling forward is easy—rolling back is where most shops stumble. Kubernetes and Docker Swarm both provide mechanisms for rapid, safe fallback:

Kubernetes rollback:

kubectl set image deployment/myapp app=registry.example.com/myapp:1.4.2-abe12cd
kubectl rollout status deployment/myapp

# If probe fails or bug escapes:
kubectl rollout undo deployment/myapp

  • Fine-grained probes (readinessProbe, livenessProbe) define health; a sketch follows below.
  • Kubernetes does not undo a rollout on its own: trigger the undo when rollout status stalls or probes keep failing within your timeout (the pipeline example later wires this up).
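
A sketch of those probes on the container from the Deployment snippet above (paths, port, and thresholds are assumptions):

containers:
  - name: app
    image: registry.example.com/myapp:1.4.2-abe12cd
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 15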

Docker Swarm (rarely the first choice for new projects, but plenty of legacy stacks still run it):

docker service update \
  --image registry.example.com/myapp:1.4.2-abe12cd \
  --update-failure-action rollback \
  --rollback-delay 15s \
  --rollback-max-failure-ratio 0.2 \
  --update-parallelism 2 \
  my_app

With --update-failure-action rollback set, Swarm reverts to the previous image automatically once task failures exceed the configured thresholds.
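
If the automated thresholds never trip but the release is still bad, there is a manual escape hatch:

# Manually revert the service to its previously deployed spec
docker service rollback my_app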

Note: Rollbacks are only as fast as your monitoring/probes. An untested readiness script can silently green-light a broken app.


Pipeline Integration: Real-World Example

CI/CD pipelines should enforce all stages: build, test, tag, push, deploy, probe, rollback. Consider a snippet from a GitHub Actions workflow deploying to Kubernetes using kubectl 1.29 and Docker 24.x:

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build & Push Image
        run: |
          # Derive the tag once and persist it for later steps (assumes registry login happened earlier)
          SHA=$(git rev-parse --short HEAD)
          echo "SHA=${SHA}" >> "$GITHUB_ENV"
          docker build -t registry.example.com/myapp:${SHA} .
          docker push registry.example.com/myapp:${SHA}
      
      - name: Deploy to Kubernetes
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}
        run: |
          # kubectl expects a file path, so write the kubeconfig secret out first
          echo "$KUBECONFIG_DATA" > "$RUNNER_TEMP/kubeconfig"
          kubectl --kubeconfig="$RUNNER_TEMP/kubeconfig" set image deployment/myapp \
            app=registry.example.com/myapp:${SHA}
          kubectl --kubeconfig="$RUNNER_TEMP/kubeconfig" rollout status deployment/myapp --timeout=90s

      - name: Post-deploy smoke test
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}
        run: |
          set -e
          echo "$KUBECONFIG_DATA" > "$RUNNER_TEMP/kubeconfig"
          # Roll back and fail the job if the smoke test does not pass
          if ! ./tests/smoke.sh; then
            kubectl --kubeconfig="$RUNNER_TEMP/kubeconfig" rollout undo deployment/myapp
            exit 1
          fi

Practical gotcha: many pipelines skip post-deploy smoke tests, so failures surface for real users before anyone rolls back. Always probe in-cluster before declaring success.
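
A smoke test need not be elaborate. A minimal sketch of tests/smoke.sh that hits the app’s health endpoint from the runner (the URL and path are illustrative; an in-cluster check would be stricter):

#!/usr/bin/env bash
set -euo pipefail

# Fail fast if the health endpoint is unreachable or unhealthy
curl --fail --silent --max-time 10 https://myapp.example.com/healthz > /dev/null
echo "smoke test passed"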


Non-Obvious Tip: Registry Retention and Tagging

Large registries with ephemeral builds (e.g., per-PR builds) will bloat unless you set strict retention policies. Clean up unused tags, but never garbage-collect tags still deployed—Kubernetes and Swarm will fail to pull missing images, killing rollback.
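
Most managed registries ship retention rules; for a bare Docker registry, the v2 API at least lets you audit which tags exist before deleting anything. A sketch, with registry URL as an assumption and authentication omitted:

# List all tags currently stored for the repository
curl --silent https://registry.example.com/v2/myapp/tags/list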


Limitations and Alternatives

  • Not all workloads are stateless; database migrations still require special orchestration.
  • Helm and Kustomize add flexibility, but can complicate rollbacks if not version-controlled in lockstep with images.
  • Some teams prefer GitOps (e.g., Flux or ArgoCD)—full declarative state, but occasionally slow to react to urgent rollbacks.

Closing Gaps: Summary

Replace mutable deployments with explicitly versioned, immutable artifacts. Wire up probe-based automated rollbacks—test these routinely, not just hypothetically. Documentation should always reference image tags, rollback commands, and health probe design. Where possible, keep state out of containers; for stateful workloads, invest additional engineering in coordinated rollbacks.

Never trust “latest”. Deploy what you’ve tested, always by version.


Questions about multi-arch builds, Helm rollbacks, or integrating with Vault-secrets at deploy time? Reach out. Over-engineering for reliability isn’t a luxury—when that Friday push goes sideways, it’s what keeps you online.