Deploying Docker Containers to Kubernetes: A Pragmatic Guide
Deploying code with `docker run` is fast, right up until you need self-healing, autoscaling, or zero-downtime updates. Kubernetes addresses these at scale, yet the path from Docker workflows to production-grade clusters often gets lost between classroom YAML and real infrastructure pitfalls. Below is a direct process for getting a Dockerized Python web app into Kubernetes, avoiding both handwaving and overengineering.
Docker is Not Enough
A single-container workflow, running `docker run -p 5000:5000 flask-k8s-demo:v1` on a local machine, works for initial iterations. However, production requires replicas, automated rollout strategies, service discovery, and health checks. Kubernetes delivers these primitives natively:
Requirement | Docker Standalone | Kubernetes |
---|---|---|
Replicas | Manual scripting | `replicas:` field |
Recovery | Ad hoc restarts | Self-healing |
Upgrades | Manual, error-prone | Rolling updates |
Service discovery | Hardcoded host/port | DNS + Service |
Prerequisites
- Docker image: Built and (preferably) hosted in a public/private registry
- Kubernetes cluster: v1.24 or newer
- `kubectl`: properly configured context
- Basic familiarity: can run `kubectl get pods` and interpret the output
Note: For isolated testing, `minikube` suffices on macOS 13+, Linux, or Windows with Hyper-V. On Mac M1/M2, adjust accordingly: not all Docker base images run natively on Apple Silicon (arm64).
Build the Docker Image
Suppose the app code is saved as `app.py` (Flask 2.2.x for this example):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask running in Docker on Kubernetes!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```
Minimal `Dockerfile` (no `.dockerignore` for this demo):

```dockerfile
FROM python:3.9.18-slim
WORKDIR /app
COPY app.py .
RUN pip install flask==2.2.5
CMD ["python", "app.py"]
```
Build and (optionally) push:
```bash
docker build -t demo/flask-k8s-demo:v1 .
# Tag for Docker Hub:
docker tag demo/flask-k8s-demo:v1 your-dockerhub-username/flask-k8s-demo:v1
docker push your-dockerhub-username/flask-k8s-demo:v1
```
Private registries will require image pull secrets in Kubernetes. A full walkthrough is out of scope here, but they are crucial for real deployments.
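As a minimal sketch, assuming a registry credential Secret named `regcred` (an illustrative name) already exists in the target namespace, the pod template in `deployment.yaml` can reference it like this:

```yaml
# Sketch only: "regcred" is an illustrative Secret name; the Secret must be of
# type kubernetes.io/dockerconfigjson and live in the same namespace as the pods.
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: flask
          image: your-dockerhub-username/flask-k8s-demo:v1
```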
Kubernetes Deployment: YAML Manifest
Below is `deployment.yaml`; tune `replicas:` as required.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-k8s-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flask-k8s-demo
  template:
    metadata:
      labels:
        app: flask-k8s-demo
    spec:
      containers:
        - name: flask
          image: your-dockerhub-username/flask-k8s-demo:v1
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: "50m"
              memory: "128Mi"
            limits:
              cpu: "250m"
              memory: "256Mi"
          livenessProbe:
            httpGet:
              path: /
              port: 5000
            initialDelaySeconds: 5
            periodSeconds: 10
```
Gotcha: Liveness/readiness checks are not optional—without them, Kubernetes cannot distinguish between “temporary hiccup” and “dead container”.
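The manifest above defines only a liveness probe. As a minimal sketch, assuming the `/` route is cheap enough to poll (a dedicated health endpoint would be a reasonable refinement), a readiness probe can sit alongside it in the same container spec so traffic is only routed to pods that respond:

```yaml
# Sketch: add under the same container entry as livenessProbe.
# Assumes "/" is an acceptable health endpoint for this demo app.
readinessProbe:
  httpGet:
    path: /
    port: 5000
  initialDelaySeconds: 3
  periodSeconds: 5
  failureThreshold: 3
```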
Exposing the Service: Internal vs. External
To publish the Flask service, you need a Service resource. Use `type: LoadBalancer` if your cluster supports it (common on EKS, GKE, and AKS). Otherwise, fall back to `NodePort`.
`service.yaml` (external access, HTTP on port 80):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-k8s-svc
spec:
  type: LoadBalancer  # NodePort on minikube
  selector:
    app: flask-k8s-demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
```
Minikube quirk: Replace `LoadBalancer` with `NodePort` and access the service via `minikube service flask-k8s-svc --url`.
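For reference, a minimal `NodePort` variant of the same Service spec might look like this; the explicit `nodePort` value is optional and is chosen here from the default 30000-32767 range as an illustration:

```yaml
# Sketch: NodePort variant for minikube or clusters without a cloud load balancer.
spec:
  type: NodePort
  selector:
    app: flask-k8s-demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
      nodePort: 30080  # optional; omitted, Kubernetes auto-assigns a port in 30000-32767
```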
Apply Manifests
Apply the Deployment first, then the Service. Kubernetes tolerates either order, since the Service matches pods by label, but this sequence keeps the rollout easy to follow:
```bash
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```
To verify:
```bash
kubectl get deployments
kubectl get pods -l app=flask-k8s-demo
kubectl get svc flask-k8s-svc
```
Expect:
```text
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
flask-k8s-demo   3/3     3            3           1m
```
and
```text
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
flask-k8s-svc   LoadBalancer   10.0.100.234   <pending|IP>   80:xxxxx/TCP   1m
```
Accessing Your Application
- Cloud (LoadBalancer): Wait for an external IP with `kubectl get svc flask-k8s-svc --watch`, then open `http://<EXTERNAL-IP>/`.
- Minikube/NodePort: Run `minikube service flask-k8s-svc --url` and paste the resulting URL into your browser.
Known issue: HTTPS is not enabled. For secure traffic, set up Ingress + certificate management (see cert-manager).
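As a rough sketch of the Ingress side, assuming an NGINX ingress controller is installed and cert-manager is configured with a ClusterIssuer (the hostname, issuer name, and TLS secret name below are all illustrative, not part of the setup above):

```yaml
# Sketch only: requires an ingress controller (ingressClassName: nginx assumed)
# and cert-manager with a ClusterIssuer named "letsencrypt-prod" (illustrative).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-k8s-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - flask.example.com
      secretName: flask-k8s-tls
  rules:
    - host: flask.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-k8s-svc
                port:
                  number: 80
```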
Practical Troubleshooting
Symptom | Check/Command | Notes |
---|---|---|
Pods in CrashLoopBackOff | `kubectl logs <pod>` | Python traceback or “ModuleNotFound” often due to dependency mismatch. |
Image not found | Inspect pull secrets, repo visibility | Private repos require `imagePullSecrets`. |
Hanging on rollout | `kubectl describe deployment <name>` | Unready containers or bad health checks delay rollouts. |
Service has no external IP | `kubectl get svc flask-k8s-svc` | NodePort works everywhere; LoadBalancer only on cloud setups. |
Example error (common YAML mistake):
```text
error: error validating "deployment.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector"
```
Edit the manifest by hand rather than relying on online generators; Kubernetes is strict about its schema.
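For this particular error, the fix is to ensure the selector exists and matches the pod template labels; this fragment simply mirrors the `deployment.yaml` above rather than introducing anything new:

```yaml
# The selector is required and must match the pod template's labels.
spec:
  selector:
    matchLabels:
      app: flask-k8s-demo
  template:
    metadata:
      labels:
        app: flask-k8s-demo
```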
One Non-Obvious Tip
Label everything (pods, services, even ConfigMaps) using a consistent `app:` and `env:` pattern. Future automation, metrics, and debugging often hinge on predictable selectors.

```yaml
metadata:
  labels:
    app: flask-k8s-demo
    env: dev
```
Summary
Directly lifting Docker containers into Kubernetes is trivial for toy workloads, but trivial solutions don’t survive real production. Prioritize:
- Well-defined, versioned OCI images
- Declarative, reviewed manifests
- Probes, resource requests/limits
- Explicit service exposure matching cluster networking
Consider automating the build/push and `kubectl apply` steps via CI/CD (e.g., GitHub Actions or Jenkins). Next, move to Helm charts for parameterizable deployments, or use Kustomize for overlay management if you’re not ready for Helm’s complexity.
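For the Kustomize route, a minimal `kustomization.yaml` might simply reference the two manifests above and pin the image tag per environment; this is a sketch, and the `v2` tag is illustrative:

```yaml
# kustomization.yaml (sketch): assumes deployment.yaml and service.yaml
# sit in the same directory; "v2" is an illustrative tag override.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: your-dockerhub-username/flask-k8s-demo
    newTag: v2
```

Apply it with `kubectl apply -k .` from the directory containing the file.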
What’s the real stumbling block for your team: YAML maintenance, registry credentials, or cluster networking? If you hit unexpected `ImagePullBackOff` loops or flaky traffic, don’t ignore them: solve at the root.