Deploying Directly from Dockerfile to Kubernetes: Practical Workflow
Building, testing, and deploying containerized apps with Kubernetes often involves unnecessary ceremony: multi-stage Dockerfiles, registry uploads, tangled CI pipelines, and sometimes Helm packaging before even a development pod is tested. For rapid validation—especially during early development—a simpler path exists: bypass the registry, skip intricate automation, and deploy local images straight to your Kubernetes cluster.
This workflow suits iterative development and debugging phases. The trade-off: it doesn't scale for production or team collaboration. But for single-developer cycles or local prototyping, there's no substitute for speed.
Workflow Overview
Environment:
- Docker v24+
- Kubernetes in Docker (“kind”) v0.20+
- kubectl v1.27+
- Node.js app used as the example; the approach is language-agnostic.
ASCII diagram:
[Dockerfile] --> docker build --> [local image] --> kind load docker-image --> [kind cluster: Pod uses image]
No container registry, no CI pipeline.
1. Author a Minimal Dockerfile
Example: a Node.js application (note: node:14-alpine reached end-of-life in 2023, so node:18-alpine is used here; adjust as needed):
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
- Note: npm ci gives deterministic installs; prefer it over npm install for CI/dev repeatability. It requires a package-lock.json, which is why the COPY above uses package*.json.
- Gotcha: Alpine images may require additional system libraries for some npm modules.
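For completeness, here is a minimal index.js the Dockerfile above could run. This is an assumed example; any server listening on port 3000 works:

// index.js: minimal HTTP server matching EXPOSE 3000 in the Dockerfile
const http = require("http");
const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from my-node-app\n");
});
server.listen(3000, () => console.log("Listening on port 3000"));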
2. Build Image Locally
docker build --tag my-node-app:dev .
- The :dev tag signals a local, non-production image.
- Always use explicit tags to avoid latest drift.
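To confirm the build produced the expected tag:
docker images my-node-app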
3. Load Image into Kind
By default, kind clusters can't see host Docker images. Pushing to a remote registry is possible, but for local workflows:
kind load docker-image my-node-app:dev --name mycluster
- This fails if the cluster has a different name; double-check with:
kind get clusters
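To verify the image actually landed on the node, exec into the kind node container; the node name below assumes the default single-node cluster named mycluster:
docker exec mycluster-control-plane crictl images | grep my-node-app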
4. Pod Manifest Referencing the Local Image
Create pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: my-node-app
  labels:
    app: my-node-app
spec:
  containers:
    - name: node
      image: my-node-app:dev          # Must match loaded tag
      imagePullPolicy: IfNotPresent   # use the loaded image; never pull from a registry
      ports:
        - containerPort: 3000
- Tip: For multiple dev images, prefer unique tags (e.g., my-node-app:$(git rev-parse --short HEAD)); a per-commit build/load sketch follows below.
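A sketch of that per-commit flow, assuming a git checkout and the cluster name used above (remember to update the image field in pod.yaml to match):

TAG=$(git rev-parse --short HEAD)
docker build --tag my-node-app:$TAG .
kind load docker-image my-node-app:$TAG --name mycluster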
5. Deploy and Observe
First, ensure the target kind cluster exists:
kind create cluster --name mycluster
- Ignore the resulting error if the cluster already exists.
Deploy:
kubectl apply -f pod.yaml
Inspect status:
kubectl get pod my-node-app -o wide
Potential issue: ImagePullBackOff. This usually means the image is missing, mistagged, or was never loaded onto the cluster's node.
To diagnose, describe the pod:
kubectl describe pod my-node-app
Common error:
Failed to pull image "my-node-app:dev": rpc error: code = NotFound desc = failed to pull and unpack image: no matching images...
This indicates either that the kind load step was skipped or that the tag is inconsistent.
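Once the pod reports Running, tail its logs to confirm the app actually started:
kubectl logs -f my-node-app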
6. Local Testing (Port Forwarding)
Expose the Pod's port for real testing, not just “it started” checks:
kubectl port-forward pod/my-node-app 8080:3000
- Now localhost:8080 proxies traffic to your container.
- Note: kubectl port-forward is ephemeral; use a Service for more robust networking when testing many containers or integrating locally (a minimal Service is sketched below).
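For reference, a minimal ClusterIP Service selecting the pod above via the app: my-node-app label from pod.yaml (names here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-node-app
spec:
  selector:
    app: my-node-app    # matches the pod's label
  ports:
    - port: 80          # in-cluster port
      targetPort: 3000  # the containerPort

Other pods can then reach the app at my-node-app:80, and kubectl port-forward service/my-node-app 8080:80 behaves like the pod-level forward.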
Additional Engineering Considerations
- Skip image push/pull latency: During active development, every second counts. Direct image loading avoids round-trips to a registry.
- Deployment objects: While this example uses a Pod, in practice move to a Deployment for self-healing and rollouts (a fuller sketch follows after this list):
apiVersion: apps/v1
kind: Deployment
# ...snip...
- Config/Secrets injection: Use ConfigMaps and Secrets rather than hardcoding configuration in images, even during dev cycles (see the envFrom block in the sketch below).
- Script hot-reloads: Tools like skaffold automate rebuilds/loads/applies on file change, cutting manual repetition (a starter skaffold.yaml is sketched below):
skaffold dev --port-forward
- Resource quotas: Don’t neglect resource requests and CPU/memory limits in Pod specs, even locally, to avoid masking OOM scenarios (see the resources block in the sketch below).
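Tying these together, a sketch of a dev-grade Deployment with resource limits and ConfigMap-driven configuration; the ConfigMap name my-node-app-config and the resource values are illustrative assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 1                 # a single replica is enough for local dev
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: node
          image: my-node-app:dev
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: my-node-app-config   # hypothetical ConfigMap with app settings
                optional: true             # let the pod start even if it's absent
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 250m
              memory: 128Mi

Create the ConfigMap with, for example, kubectl create configmap my-node-app-config --from-literal=LOG_LEVEL=debug.

And a starter skaffold.yaml for the hot-reload loop; the schema varies across Skaffold releases, so treat this as a sketch and prefer skaffold init for your project:

apiVersion: skaffold/v4beta6   # version string depends on your Skaffold release
kind: Config
build:
  artifacts:
    - image: my-node-app       # rebuilt on file change
  local:
    push: false                # keep images local; no registry round-trip
manifests:
  rawYaml:
    - pod.yaml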
Summary
By cutting out registries and heavy CI pipelines, this approach lets you iterate from Dockerfile to Kubernetes pod in under five minutes. It's not suited to production deployment or team workflows, but for fast local feedback, especially when combined with pod log tailing and port-forwarding, it drives high developer efficiency.
Once images stabilize or teamwork begins, wire up a container registry, refactor into Deployments, and layer on Helm or Kustomize for robust, repeatable deployments.
Side note:
For persistent workloads, StatefulSets, or complex multi-pod applications, you’ll hit the limits of this workflow fast. Still, it's a valuable pattern for short feedback loops and live debugging of app/server environments.