Deploying Docker Images on Kubernetes: Straightforward Container Orchestration
The distinction between running a container and running a production system emerges only when scale, reliability, and maintenance surface as requirements. Docker handles the former; Kubernetes is built for the latter.
When to Transition from Docker to Kubernetes
A test environment running docker run is fine, right up until the app needs scaling, self-healing, or rolling updates. At that point, static containers fall short. Kubernetes (K8s) orchestrates containers across nodes, automates restarts, rollouts, and resource allocation, and can be CI/CD-driven. Migrating Docker workloads to K8s keeps your container images but trades single-node simplicity for managed, distributed control.
Example: Node.js App - Container to Cluster
Too much theory obscures the process—a working example clarifies it.
Dockerfile (Node.js 18 LTS)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
USER node
EXPOSE 3000
CMD ["node", "server.js"]
Note: npm ci --omit=dev installs from the lockfile; it is faster than npm install and keeps development dependencies out of production images.
Build and local test:
docker build -t registry.example.com/demo/node-app:1.0.0 .
docker run -it --rm -p 3000:3000 registry.example.com/demo/node-app:1.0.0
If you see Server listening on port 3000, the image is functional.
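The Dockerfile's CMD expects a server.js entry point that this article never shows. A minimal sketch, where the plain http module, the log line above, and the /healthz path used by the probes later are all assumptions about the demo app:

// server.js: minimal demo server (illustrative sketch, not the article's actual app)
const http = require("http");

const server = http.createServer((req, res) => {
  if (req.url === "/healthz") {
    // Health endpoint polled by the Kubernetes probes defined later
    res.writeHead(200, { "Content-Type": "text/plain" });
    return res.end("ok");
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from node-app\n");
});

server.listen(3000, () => console.log("Server listening on port 3000"));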
Push to a Container Registry
Kubernetes cannot pull from your laptop. Use Docker Hub, GitHub Container Registry, or an internal Harbor instance.
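Pushing typically requires authenticating to the registry first:

docker login registry.example.com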
docker push registry.example.com/demo/node-app:1.0.0
Tip: Avoid the latest tag; immutable, versioned tags facilitate rollbacks and deployment traceability.
Cluster Setup
Local: minikube (v1.33+) or kind.
Cloud: GKE, EKS, or AKS; all require a kubeconfig (kubectl version --client ≥ 1.27 recommended).
minikube start --kubernetes-version=v1.28.3
kubectl config use-context minikube
Kubelet logs can diagnose cluster start issues:
minikube logs | grep kubelet
Kubernetes Deployment Manifest
Minimalism aids clarity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
  labels:
    app: node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: registry.example.com/demo/node-app:1.0.0
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
Observability matters; without a readinessProbe, rolling updates can go sideways if an app takes longer than expected to boot.
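A livenessProbe (omitted from the manifest above) can sit alongside it; a minimal sketch, assuming /healthz is cheap to poll and that a hung process should simply be restarted:

livenessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10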
Deployment
kubectl apply -f deployment.yaml
kubectl rollout status deployment/node-app
If pods fail, inspect event logs:
kubectl describe deployment/node-app
kubectl logs deploy/node-app
Watch for errors like:
Back-off restarting failed container
ImagePullBackOff
Hint: Privately hosted images require proper imagePullSecrets, a common tripwire on enterprise clusters.
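The deployment-side change is a single field in the pod spec; regcred is a placeholder name for a registry secret (one way to create it appears in the error example near the end):

spec:
  imagePullSecrets:
    - name: regcred   # placeholder; must match an existing registry secret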
Exposing the Service
For local use, a NodePort or port-forward suffices. Production should use a LoadBalancer Service or an Ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: node-app-svc
spec:
  type: NodePort
  selector:
    app: node-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
      nodePort: 31080
Apply the service:
kubectl apply -f service.yaml
On Minikube, get the service URL:
minikube service node-app-svc --url
Alternatively, for quick debugging:
kubectl port-forward svc/node-app-svc 3000:80
curl http://localhost:3000/healthz
Non-Obvious Tip: Separate Config from Images
For anything beyond a trivial demo, use a ConfigMap or Secret to supply runtime configuration. Hardcoding credentials or environment variables in the image leads to brittle deployments and security risk.
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-app-config
data:
  NODE_ENV: "production"
  LOG_LEVEL: "warn"
Attach it via the deployment's envFrom section.
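A container-level excerpt, reusing the names from the manifest above:

containers:
  - name: node-app
    image: registry.example.com/demo/node-app:1.0.0
    envFrom:
      - configMapRef:
          name: node-app-config

Each key in the ConfigMap's data becomes an environment variable in the container.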
Gotchas and Pragmatic Advice
- Image tags: Never use mutable tags in CI/CD pipelines unless intentional.
- Resource requests: Overcommit a node and the kubelet will start evicting pods under pressure. Leave headroom.
- Health probes: Always supply at least a readinessProbe for HTTP services.
- Namespaces: Avoid default; create a project- or team-specific namespace.
- Network Policies: Isolate traffic by default in production; clusters that are open by default invite lateral movement. A minimal default-deny example follows this list.
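A sketch of a default-deny ingress policy; the node-app namespace is an assumption, substitute your own:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: node-app     # assumed namespace; adjust to yours
spec:
  podSelector: {}         # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress             # no ingress rules are listed, so all inbound traffic is denied

Note that enforcement requires a CNI plugin that supports NetworkPolicy (Calico, Cilium, and similar); on clusters without one, the object is accepted but ignored.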
Example Error: Image Pull Authentication
Failed to pull image "private.example.com/team/app:2.0.1": rpc error: code = Unknown desc = Error response from daemon: pull access denied
Solution: Create a Secret of type kubernetes.io/dockerconfigjson and reference it in the deployment.
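One way to create it; the server, username, and password values here are placeholders:

kubectl create secret docker-registry regcred \
  --docker-server=private.example.com \
  --docker-username=<user> \
  --docker-password=<token>

Then reference regcred from the pod spec via imagePullSecrets, as shown earlier.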
Recap
Migrating from Docker alone to Kubernetes orchestration requires minimal changes if you control your images and embrace declarative YAML. The operational benefits (scaling, healing, rolling deploys) vastly outweigh the initial learning curve. Start with known-good containers, explicit versioning, and the most stripped-down manifests possible. Iterate; don't over-engineer at the outset.
Note: Helm charts, auto-scaling, and Ingress are natural next steps—but outside the scope here.
For further questions or production-grade cluster design considerations, see the CNCF best practices or open an issue in your team’s internal Knowledge Base.