Docker to Kubernetes: A Practical Engineer’s Migration Field Guide
Over time, the limits of single-node Docker become impossible to ignore. Manual container launches, bespoke shell scripts for scaling, and unpredictable networking—sooner or later, orchestration isn’t a choice but a necessity. Here’s how that transition actually unfolds for working developers.
Problem: Scaling Beyond Docker
Scenario: A team has a Node.js REST API containerized for local development. It’s deployed on a handful of VMs with docker run. Now, requirements include:
- Rolling out across three availability zones
- Zero-downtime updates
- Centralized config, separate from application code
- Autoscaling based on incoming traffic
Docker alone can't handle service discovery, rolling upgrades, or autoscaling. Ad-hoc solutions break at scale, and the operational overhead balloons.
Kubernetes solves this with robust primitives for deployment, configuration, and health management.
Core Kubernetes Concepts—No Fluff
- Pod: The smallest deployable unit, wrapping one or more tightly coupled containers (think one Pod per microservice process). Ephemeral by design.
- Deployment: Declaratively manages a ReplicaSet, ensuring a specified number of Pods are running; supports rolling updates and rollbacks.
- Service: Stable virtual IP (ClusterIP/NodePort/LoadBalancer) that abstracts access to a set of Pods; handles load balancing out of the box.
- ConfigMap/Secret: Mount runtime configuration or credentials; decouples environment from image.
Note: Pods are not immortal—Kubernetes can and will reschedule them after failures, so never store state locally inside Pods without a PersistentVolume.
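If a workload genuinely needs durable storage, the usual pattern is a PersistentVolumeClaim mounted into the Pod. A minimal sketch for illustration only (the name node-api-data and the 1Gi size are assumptions, not part of the scenario above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: node-api-data
spec:
  accessModes:
    - ReadWriteOnce      # single-node read/write, typical for one Pod writing local data
  resources:
    requests:
      storage: 1Gi       # illustrative size
The Deployment then references the claim under spec.template.spec.volumes and mounts it into the container via volumeMounts.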
Local Cluster: Minikube (v1.32+) and kubectl (v1.29+)
Spin up a cluster with:
minikube start --kubernetes-version=v1.29.0
kubectl version --client
Check cluster health:
kubectl get nodes
kubectl get componentstatuses   # deprecated since v1.19; kubectl get pods -n kube-system is a better check
Watch for issues like CrashLoopBackOff in pod status; this is common with misconfigured images or missing config.
Migrating a Dockerized Node.js Service
Typical Docker usage:
FROM node:20-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
HEALTHCHECK --interval=10s --timeout=3s CMD wget -q --spider http://localhost:3000/ || exit 1
CMD ["node", "index.js"]
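One easy-to-miss detail: COPY . . pulls in everything not excluded by a .dockerignore. A minimal .dockerignore for a typical Node.js layout (adjust to your repo) might be:
node_modules
npm-debug.log
.git
.env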
Locally:
docker build -t acme/node-api:2.3 .
docker run -p 3000:3000 acme/node-api:2.3
Registry Required
Kubernetes needs an image in a reachable registry. For purely local development with minikube, you can build directly against the cluster’s Docker daemon:
eval $(minikube docker-env)
docker build -t node-api:dev .
But for real clusters, publish to Docker Hub/ECR/GCR:
docker tag acme/node-api:2.3 myrepo/node-api:2.3
docker push myrepo/node-api:2.3
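If the registry is private, the cluster also needs pull credentials. One common approach is a docker-registry secret (the name regcred and the placeholder values below are illustrative):
kubectl create secret docker-registry regcred \
  --docker-server=<registry-url> \
  --docker-username=<user> \
  --docker-password=<password>
The Deployment’s Pod template then references it via spec.template.spec.imagePullSecrets with - name: regcred.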
From docker run to Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-api
  labels:
    app: node-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-api
  template:
    metadata:
      labels:
        app: node-api
    spec:
      containers:
        - name: node
          image: myrepo/node-api:2.3
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: node-api-config
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20
- readinessProbe/livenessProbe replace Docker's HEALTHCHECK (a minimal /healthz handler is sketched below).
- Environment config is injected via a ConfigMap; see below.
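Both probes assume the service actually exposes /healthz. If the API uses Express (an assumption; the scenario only says Node.js), a minimal handler could look like this:
// healthz.js -- illustrative health endpoint matching the probes above
const express = require('express');
const app = express();

// Return 200 once the process can serve traffic; both probes hit this path.
app.get('/healthz', (req, res) => res.status(200).send('ok'));

app.listen(3000, () => console.log('node-api listening on 3000'));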
Apply:
kubectl apply -f deployment.yaml
Monitor rollout:
kubectl get deployment node-api
kubectl rollout status deployment/node-api
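If a rollout goes wrong, Deployments also support rollback without custom tooling:
kubectl rollout history deployment/node-api
kubectl rollout undo deployment/node-api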
If pods don't become READY, run:
kubectl describe pod <pod-name>
kubectl logs <pod-name>
Watch for permission errors, missing config, or image pull failures.
Publishing a Service
apiVersion: v1
kind: Service
metadata:
  name: node-api-svc
spec:
  selector:
    app: node-api
  type: NodePort   # Not for production; in cloud clusters use LoadBalancer or Ingress
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
      nodePort: 31300   # Minikube: range 30000–32767
Apply:
kubectl apply -f service.yaml
minikube service node-api-svc --url
Alternative:
curl $(minikube ip):31300/healthz
Note: For production, expose via Ingress + cert-manager for TLS.
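A minimal Ingress sketch for that setup (assumes an NGINX ingress controller is installed; api.example.com is a placeholder host, and cert-manager/TLS is omitted here):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-api-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: node-api-svc
                port:
                  number: 80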
Config and Secrets—No More Hardcoding
Sample configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-api-config
data:
  NODE_ENV: production
  API_DEBUG: "false"
Add to Deployment spec:
envFrom:
  - configMapRef:
      name: node-api-config
For secrets (e.g. API tokens):
apiVersion: v1
kind: Secret
metadata:
  name: node-api-secrets
type: Opaque
data:
  TOKEN: bXlTdXBlclNlY3JldA==   # echo -n "mySuperSecret" | base64
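Hand-encoding base64 is error-prone; the equivalent imperative command lets kubectl do it for you:
kubectl create secret generic node-api-secrets --from-literal=TOKEN=mySuperSecret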
Mount in Deployment:
envFrom:
  - secretRef:
      name: node-api-secrets
Caution: Kubernetes secrets are only base64-encoded, not encrypted unless you enable EncryptionConfiguration on the API server.
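For reference, encryption at rest is enabled by pointing the API server's --encryption-provider-config flag at a file like the following sketch (the key name and placeholder value are illustrative; managed clusters usually handle this for you):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so existing plaintext secrets stay readable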
Effortless Scaling
Scaling up means changing a single number—not writing scripts:
kubectl scale deployment/node-api --replicas=8
kubectl get pods -l app=node-api
Instant elasticity, provided enough cluster resources exist.
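Manual scaling covers ad-hoc needs; for the traffic-driven autoscaling in the original requirements, the standard primitive is a HorizontalPodAutoscaler. A minimal CPU-based sketch (assumes metrics-server is running and the containers declare CPU requests):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: node-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: node-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU passes 70% of requests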
Troubleshooting: What Breaks (and Why)
- Forgot to push the image to a registry → ImagePullBackOff
- Port mismatch (Service targetPort ≠ containerPort) → connection refused
- Readiness probe failing → Pods never receive traffic
- Misconfigured ConfigMap name → containers crash-loop on missing env vars
Example log snippet:
Error: Missing NODE_ENV variable
npm ERR! code ELIFECYCLE
Non-Obvious Tips
- For faster local development, use Skaffold or Tilt to automate build/deploy cycles.
- To convert docker-compose.yaml for non-trivial services, try kompose, but review the generated manifests; they often need manual edits, especially for persistent storage (see the example after this list).
- Minikube’s disk-backed PersistentVolume is not production-grade; in the cloud, use managed storage classes.
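For the kompose route mentioned above, the conversion itself is a one-liner; the review step is where the real work is:
kompose convert -f docker-compose.yaml
# typically emits Deployment and Service manifests per compose service; inspect before applying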
Summary
Shipping containers via Docker is the easy part. Running them reliably, securely, and at scale requires a scheduler—Kubernetes provides that abstraction, but demands a different way of thinking.
Migrating isn’t just syntax or tool swaps; expect refactoring of configuration, monitoring, and CI/CD pipelines. Critically, ensure all config and credentials are decoupled from your images.
Kubernetes looks complex at first glance but addresses operational pain points born from scaling up. The real challenge isn’t complexity—it’s unlearning fragile Docker-era habits.