From Docker To Kubernetes

#Cloud #DevOps #Containers #Docker #Kubernetes #Containerization

Mastering the Leap: Transitioning from Docker to Kubernetes Without Losing Control

Application teams relying on Docker frequently hit scalability and reliability limits once environments grow beyond a single host. Orchestrating dozens of containers quickly turns messy: manual restarts, ad-hoc upgrades, monitoring gaps. Kubernetes solves this, but only if you plan the migration so you avoid rewriting everything that already works.

Kubernetes is more than a scheduler. It’s a declarative API for distributed systems: cluster state management, node healing, self-service deployments, RBAC, the works. But bringing old “docker run” habits straight into Kubernetes is a recipe for frustration. This guide walks through the critical steps: what to adapt, when to automate, and which trade-offs show up in real migrations.


Beyond Docker: Where Control Slips

Docker's strengths remain clear:

  • Consistent local test and build environments (image size matters here; keep Dockerfiles lean).
  • Rapid feedback cycles during development.
  • Lightweight container management, especially on a developer laptop.

The catch: deploy five or more interacting services, then try to recover from a node failure or implement rolling updates by hand. You’ll hit the limits immediately. Some classic red flags:

  • Ad-hoc network bridges can’t substitute for real service discovery.
  • No built-in auto-scaling or cluster-level scheduling.
  • Upgrades = docker stop + docker pull + hope.
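
That last point deserves emphasis. A manual upgrade of one container (hypothetical image names) looks something like this, and nothing catches you if the new version crashes:

docker pull myorg/web:1.1.3
docker stop web && docker rm web
docker run -d --name web -p 8080:8080 myorg/web:1.1.3
# No automatic rollback: if 1.1.3 fails to boot, you repeat the dance with 1.1.2 by hand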

Kubernetes supplies these missing primitives. Notably:

  • Controllers for pod scheduling/horizontal scaling.
  • Integrated liveness/readiness checks (probe failures cause restarts).
  • Managed load balancing with dynamic Service endpoints.

For any org running a multi-tier or microservices architecture, these capabilities become non-negotiable.
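
As a taste of what those controllers buy you: a minimal HorizontalPodAutoscaler sketch, assuming a Deployment named node-sample (the example used later in this guide) and a metrics server installed in the cluster:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: node-sample
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: node-sample
  minReplicas: 2
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        # scale out when average CPU crosses 70% of the pods' requests
        averageUtilization: 70

Hand-rolled docker run loops have no equivalent for this.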


First: Harden Your Docker Practices

Review foundational Docker workflow before making orchestration decisions.

Practical checklist:

  • Minimal base images. Avoid general-purpose images—node:18-alpine, python:3.12-slim, golang:1.20-alpine beat their heavier counterparts.
  • Stateless execution. Any persistent state must be stored on volumes or off the instance. Rebuilds, reschedules, or pod evictions will otherwise cause data loss.
  • Well-defined port exposure. If your Dockerfile uses EXPOSE 8080 but your configuration actually binds to 8000, expect outages; align these explicitly.
  • Runtime configuration via environment variables. Never hardcode secrets or configs; support overrides.

Sample Node.js container, production-grade:

# Small, pinned base image keeps the final image lean
FROM node:18-alpine

WORKDIR /srv/app

# Copy manifests first so the dependency layer is cached until the lockfile changes
COPY package.json yarn.lock ./
RUN yarn install --production --frozen-lockfile

COPY . .

ENV NODE_ENV=production

EXPOSE 3000

# Flag the container unhealthy when the check script fails
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s CMD node healthz.js || exit 1

CMD ["node", "server.js"]

Tip: Use Trivy or Docker Scout (the successor to the deprecated docker scan) for image vulnerability checks; it’s better to catch issues early than to retrofit security later.
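
With Trivy, for instance, a scan is one command (image name illustrative):

trivy image --exit-code 1 --severity HIGH,CRITICAL myorg/node-app:v1.3.4
# non-zero exit on HIGH/CRITICAL findings makes this easy to wire into CI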


Compose: The Stepping Stone

Porting a stack from Docker Compose can provide conceptual continuity while bridging to Kubernetes.

Why not jump straight to full Kubernetes YAML? Compose has a simpler syntax and helps clarify the actual service interactions, which often get muddied across K8s manifests and custom resources.

Migration flow:

  1. Assemble all services in a single docker-compose.yaml.

  2. Validate with docker compose up --build.

  3. Use Kompose for an initial translation:

    kompose convert -f docker-compose.yaml
    

Example Compose (abbreviated):

version: "3"
services:
  web:
    image: myorg/web:1.1.2
    environment:
      - PORT=8080
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15-alpine
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=securepass # demo value only; never hardcode real secrets
volumes:
  pg_data:

Kompose maps these to Deployment, Service, and PersistentVolumeClaim objects. The results are not production-optimized: add resource limits and probes, and pin images to explicit tags, never latest.

Known issue: Kompose sometimes mislabels ports or misconfigures persistent volumes. Always review output before applying.
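
One cheap safeguard: have the API server validate the generated manifests without persisting anything, then read the diff yourself:

kubectl apply --dry-run=server -f .
kubectl diff -f .   # shows what would change against live cluster state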


Crosswalk: Docker vs Kubernetes Objects

Common translation pitfalls:

Docker             Kubernetes                         Practical Guide
docker run         Pod / Deployment / Job             Most apps → Deployment; cron → CronJob
Docker Compose     Deployment + Service + PVC         Compose networks/services ≈ K8s Services
Volume mount       PersistentVolumeClaim              Storage classes / persistent volume provisioning
Port publishing    Service (ClusterIP/NodePort/LB)    NodePort or LoadBalancer required for external traffic
ENV                env array in spec                  ConfigMaps for bulk/non-secret values, Secrets for sensitive ones

Note: Not every concept transfers 1:1. Docker Swarm networks differ from K8s service discovery; test before assuming equivalence.
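
To illustrate the last table row, here is one way to split Compose environment entries between non-secret and secret config (names here are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  PORT: "8080"   # non-sensitive settings belong in a ConfigMap
---
apiVersion: v1
kind: Secret
metadata:
  name: web-secrets
type: Opaque
stringData:
  POSTGRES_PASSWORD: securepass   # sensitive values belong in a Secret

A container spec then pulls both in with an envFrom excerpt like:

        envFrom:
        - configMapRef:
            name: web-config
        - secretRef:
            name: web-secrets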


Proof of Concept: Single Workload to Kubernetes

Example deployment for the Node.js app: two replicas, an environment variable, and an HTTP Service. Minimal but extendable.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-sample
  template:
    metadata:
      labels:
        app: node-sample
    spec:
      containers:
      - name: node
        image: ghcr.io/myorg/node-app:v1.3.4
        env:
        - name: NODE_ENV
          value: production
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: "100m"
            memory: "64Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: node-public
spec:
  type: ClusterIP
  selector:
    app: node-sample
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000

Apply:

kubectl apply -f node-sample.yaml
kubectl rollout status deployment/node-sample
kubectl port-forward svc/node-public 8080:80

Gotcha: If you use type: LoadBalancer locally (e.g. with minikube), make sure you run its tunnel or fall back to NodePort for external routing.
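
With minikube that means either running the tunnel or patching the Service (commands assume the node-public Service above):

minikube tunnel   # separate terminal; assigns external IPs to LoadBalancer Services

# ...or switch to NodePort and use the allocated high port instead:
kubectl patch svc node-public -p '{"spec":{"type":"NodePort"}}'
kubectl get svc node-public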

Check pod events; real errors look like:

Warning  Failed     1m    kubelet  Failed to pull image "ghcr.io/myorg/node-app:v1.3.4": rpc error: code = NotFound
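
To surface those events:

kubectl get pods -l app=node-sample            # look for ErrImagePull / ImagePullBackOff
kubectl describe pod -l app=node-sample        # per-pod event log, including pull errors
kubectl get events --sort-by=.lastTimestamp    # cluster-wide view, newest last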

Source Control Everything

Kubernetes’ entire premise is declarative state—the cluster matches your YAML/Git, not the other way around.

  • Store all manifests in a repository.
  • Use kubectl diff -f <manifest> before rollouts.
  • Adopt CI/CD to track divergence between staging and production.

Consider using Kustomize or Helm for templating once config duplication becomes a maintenance burden.
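
A minimal Kustomize sketch, assuming the node-sample manifests from above sit in the same directory as this kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- node-sample.yaml
images:
- name: ghcr.io/myorg/node-app
  newTag: v1.3.5   # bump versions here instead of editing the Deployment

Apply it with kubectl apply -k . and the image transform rewrites the tag without touching the base manifest.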


Gradual Expansion, Not a Big Bang

Don’t migrate every service at once. Start with non-critical or stateless workloads. Early victories expose platform bottlenecks—storage drivers, ingress controller support, or resource requests that starve low-priority workloads.

Monitoring: Layer in Prometheus/Grafana for metrics. At minimum, use kubectl top pods and kubectl logs for baselining.

Scaling tip: Test kubectl scale deployment/node-sample --replicas=4 and simulate pod failures to see self-healing in practice.
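
A quick drill, using the Deployment from the proof of concept:

kubectl scale deployment/node-sample --replicas=4
kubectl delete pod -l app=node-sample --wait=false   # kill every matching pod
kubectl get pods -l app=node-sample -w               # watch replacements come up within seconds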

Known issue: Stateful services (databases) bring extra complexity: CSI drivers, PVC retention policies, and crash recovery. Unless your team has strong K8s experience, keep initial migrations stateless.


Summary

Docker handled initial growth, but at scale, operational headaches multiply. Kubernetes enables controlled expansion—abstracting failures, deployments, and service management—if migration is deliberate.

Harden Docker workflows, use Compose-to-K8s translation to clarify system boundaries, and don’t expect direct analogues for every pattern. Start migrations small, audit carefully, and keep configs under source control. There’s no finish line—only continuous improvement and increased system reliability.


Want a follow-up on Helm charts, Ingress, or advanced CI/CD pipeline design for Kubernetes? Leave specifics—and expect a response based on what actually works, not just what’s trendy.