Add Docker Container To Kubernetes

Reading time: 1 min
#Cloud #DevOps #Containers #Kubernetes #Docker #Containerization

Deploying a Docker Container on Kubernetes: Direct, Reliable Methods

Running code in a local Docker environment is trivial. Moving that container into a Kubernetes-managed cluster—while preserving reliability, scalability, and maintainability—calls for a few non-negotiable steps. Skip these, and scaling or debugging becomes difficult.

Core Prerequisites

  • Docker Engine (tested with v24.x) with your application image built locally or hosted in a registry (Docker Hub, Amazon ECR, GitHub Container Registry, etc.).
  • Kubernetes cluster: Minikube (v1.32+), Kind, or production environments (EKS/GKE/AKS).
  • kubectl CLI configured for the above cluster (a plain kubectl version should report both a client and a reachable server; the --short flag has been removed from recent kubectl releases). A quick sanity check is sketched below.
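
Before writing any manifests, it is worth confirming the toolchain end to end. A minimal sanity check, assuming a local cluster (Minikube, Kind) is already running:

docker version                  # client and daemon both respond
kubectl cluster-info            # control plane endpoint is reachable
kubectl config current-context  # confirm you are pointed at the intended cluster
kubectl get nodes               # at least one node should report STATUS Ready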

Example Application: Node.js HTTP Server

Consider a minimal Node.js HTTP server as the workload.

app.js:

// Minimal HTTP server: answers every request on port 3000.
const http = require("http");
const PORT = 3000;
http.createServer((_, res) => res.end("Hello from container!")).listen(PORT);

Side note: Node.js v14 reached end-of-life in April 2023. For production, use node:18-alpine or newer, as the Dockerfile below does.

Dockerfile:

FROM node:18-alpine            # small, maintained base image
WORKDIR /srv
COPY package*.json ./          # copy manifests first so the install layer is cached
RUN npm install --omit=dev     # production dependencies only
COPY . .
EXPOSE 3000                    # documents the port; publishing is handled by Kubernetes
CMD ["node", "app.js"]

Build and push (assumes Docker Hub):

docker build -t johndoe/hello-node:1.0 .
docker push johndoe/hello-node:1.0
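
Before trusting the pushed image, it is cheap to verify the container actually serves traffic locally (hello-test is just a throwaway container name):

docker run --rm -d -p 3000:3000 --name hello-test johndoe/hello-node:1.0
curl http://localhost:3000/    # expect: Hello from container!
docker stop hello-test         # --rm removes the container on stop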

If cluster nodes can’t access Docker Hub directly, consider using a local registry or loading the image onto the nodes, as sketched below.
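
For the local clusters named above there are purpose-built shortcuts that copy an image from the host's Docker cache into the cluster, no registry required:

minikube image load johndoe/hello-node:1.0      # for Minikube
kind load docker-image johndoe/hello-node:1.0   # for Kind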


Kubernetes Deployment YAML

The Deployment controller maintains the desired replica count and automatically replaces failed pods. Here’s a minimal manifest, annotated for clarity.

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 3                      # desired pod count; the controller converges to this
  selector:
    matchLabels:
      app: hello-node              # must match the pod template labels below
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
        - name: hello-node
          image: johndoe/hello-node:1.0
          ports:
            - containerPort: 3000
          resources:
            requests:              # informs scheduling decisions
              memory: "64Mi"
              cpu: "100m"
            limits:                # hard caps; exceeding the memory limit gets the pod OOM-killed
              memory: "128Mi"
              cpu: "200m"
          readinessProbe:          # traffic is withheld until this check passes
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 2
            periodSeconds: 10

Key additions:

  • Resource requests and limits (often overlooked in test deployments).
  • Readiness probe for a more robust rollout: requests are not routed to a pod until it reports ready (see the sketch below).
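
Once the manifest is applied (next section), the probe's effect is observable: the rollout only completes when replicas pass their readiness checks. A minimal way to watch that:

kubectl rollout status deployment/hello-node   # blocks until the rollout succeeds or times out
kubectl get pods -l app=hello-node             # READY should show 1/1 for each pod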

Exposing the Application via Kubernetes Service

Pods receive dynamic IPs. To expose port 3000 cluster-wide or externally, use a Service. Avoid NodePort in production unless absolutely necessary.

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  selector:
    app: hello-node
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      nodePort: 32000  # Optional: pins the node port (30000–32767); handy for Minikube/Kind demos
  type: NodePort

Note: For EKS/GKE/AKS, change type: NodePort to type: LoadBalancer so the cloud provider provisions an external load balancer. Avoid exposing high ports unless you control the firewall rules.
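
With type: LoadBalancer, the external address appears asynchronously once provisioning finishes; one way to watch for it:

kubectl get svc hello-node -w   # wait until EXTERNAL-IP changes from <pending>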


Deployment and Validation Steps

Apply manifests:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

Pod status tracking:

kubectl get deploy hello-node
kubectl get pods -l app=hello-node -o wide
kubectl get svc hello-node

If pods are stuck in ImagePullBackOff, double-check your registry credentials and the image name; the sketch below shows where to look.
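
A minimal debugging pass, assuming a private registry (the secret name regcred and the angle-bracket placeholders are illustrative):

kubectl describe pod <pod-name>   # the Events section shows the exact pull error
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<password>

The secret then has to be referenced from the pod spec via imagePullSecrets.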


Accessing the Application

For Minikube:

minikube service hello-node --url

Sample output:

http://127.0.0.1:32000/

Alternatively, for any cluster:

kubectl port-forward svc/hello-node 8080:3000
curl http://localhost:8080/

Output should read:
Hello from container!


Extra Considerations: Production Hardening

  • Rolling Update: Modify the image tag/version in the deployment, re-apply, and observe zero-downtime updates (see the sketch after this list).
  • Namespacing: Isolate with metadata.namespace: staging—do not use default for nontrivial applications.
  • Health Checks: Readiness and liveness probes prevent bad rollouts and enable fast failure detection.
  • Resource Quotas: Set and enforce limits to avoid “noisy neighbor” issues, especially in multi-tenant clusters.
  • ImagePullPolicy: For development, IfNotPresent avoids redundant pulls. If you push mutable tags such as latest, use Always in production so nodes don’t run stale cached images.
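
A minimal rolling-update sketch (the 1.1 tag is hypothetical; any new immutable tag works the same way):

kubectl set image deployment/hello-node hello-node=johndoe/hello-node:1.1
kubectl rollout status deployment/hello-node   # old pods drain as new ones become ready
kubectl rollout undo deployment/hello-node     # one-command rollback if the new tag misbehaves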

Known Issues and Gotchas

  • A NodePort Service listens on every node’s IP; if the node firewall is misconfigured, that port may be reachable from the internet unintentionally.
  • Deploying images that exist only in your local Docker cache against a remote cluster fails with ErrImagePull/ImagePullBackOff; push to a registry the nodes can reach first.
  • Not all Dockerisms translate: environment variables come from the pod spec (env/envFrom), not from docker run -e (see the sketch below).
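
A minimal way to set an environment variable on the running Deployment without editing YAML (GREETING is illustrative; changing the environment triggers a new rollout):

kubectl set env deployment/hello-node GREETING="hello from k8s"
kubectl set env deployment/hello-node --list   # print the resulting container environment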

Summary Table: Core Steps

Task                        Command/Resource
Build & push image          docker build / docker push
Write Deployment YAML       deployment.yaml
Write Service YAML          service.yaml
Deploy to Kubernetes        kubectl apply -f
Access service (Minikube)   minikube service ...
Validate logs               kubectl logs <pod>

Migrating a container from local Docker to Kubernetes is less about new tools and more about declarative configuration, visibility, and explicit management of service exposure and resources. Little details, such as readiness probes or image pull policies, create the difference between a weekend test and a cluster that runs for years.

For more complex orchestration (e.g., Helm, GitOps, automated image scanning), build on top of this foundation. Mistakes escalate at scale; get the basics right.

Got questions about secrets management, rolling out blue/green deployments, or automating manifests? There are several approaches—Helm charts, Kustomize overlays, progressive delivery controllers like Argo. Each brings trade-offs in complexity, auditability, and team workflow.


Nothing is ever "seamless"—but this workflow is as close as you'll get for direct container-to-Kubernetes migration.