Add Docker Container To Kubernetes

#Cloud #DevOps #Containers #Kubernetes #Docker #Containerization

Mastering the Seamless Transition: Deploying a Docker Container on Kubernetes the Right Way

Forget complex abstraction layers—let's cut through the noise and show you exactly how to add your Docker container to Kubernetes without getting lost in unnecessary tooling or assumptions. This is about mastering the core, real-world workflow, not fluff.

Deploying a Docker container on Kubernetes might sound daunting if you’re used to working with containers locally or on a single system. But understanding this transition is crucial: Kubernetes takes your simple containerized app and transforms it into a scalable, resilient cloud-native deployment that can thrive in production environments.

In this post, I’ll guide you step-by-step through the exact process of deploying your Docker container to Kubernetes. No detours, no complicated abstractions—just practical commands and configuration files you can apply immediately.


Prerequisites

Before we dive in:

  • Have Docker installed and a Docker image built locally (or pushed to a container registry).
  • A running Kubernetes cluster (minikube or kind for local testing, or any managed/live cluster).
  • kubectl CLI installed and configured to connect to your cluster.
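
Before moving on, confirm that kubectl is actually pointed at the cluster you intend to use:

kubectl cluster-info
kubectl get nodes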

Step 1: Build Your Docker Image (If Not Done Yet)

Let’s start with a simple example: a basic Node.js app serving “Hello from container!”

Create app.js:

const http = require('http');
const PORT = 3000;

const server = http.createServer((req, res) => {
  res.end("Hello from container!");
});

server.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

Create Dockerfile (if you don't have a package.json yet, run npm init -y first so the COPY package*.json and npm install steps have something to work with):

FROM node:14-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]

Build the image locally (replace <your-image-name>):

docker build -t <your-image-name>:latest .
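
Before pushing anywhere, it's worth a quick local smoke test of the image (assuming port 3000 is free on your machine):

docker run --rm -p 3000:3000 <your-image-name>:latest

# In a second terminal:
curl http://localhost:3000/
# Output should be "Hello from container!"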

For your Kubernetes cluster nodes (or anyone else) to pull the image, push it to a container registry such as Docker Hub:

docker tag <your-image-name>:latest <dockerhub-username>/<your-image-name>:latest
docker push <dockerhub-username>/<your-image-name>:latest
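
If you're only testing against a local minikube or kind cluster, you can skip the registry step entirely and load the locally built image straight into the cluster (recent versions of both tools support this):

minikube image load <your-image-name>:latest
# or, for kind:
kind load docker-image <your-image-name>:latest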

Step 2: Writing the Kubernetes Deployment YAML

The Deployment object in Kubernetes manages replica pods running your Docker containers.

Create deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-deployment
spec:
  replicas: 3 # Run 3 pods for redundancy and load distribution 
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node-container
        image: <dockerhub-username>/<your-image-name>:latest # Or your image reference here 
        ports:
        - containerPort: 3000

This manifest tells Kubernetes:

  • Run three instances of your container.
  • Match pods using the label app: hello-node.
  • The pod’s container listens on port 3000.
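
One caveat: with a :latest tag, Kubernetes defaults imagePullPolicy to Always, so every pod start triggers a registry pull. If you loaded the image directly into a local cluster instead of pushing it, set the policy explicitly. A minimal variation of the container entry above:

      containers:
      - name: hello-node-container
        image: <your-image-name>:latest
        imagePullPolicy: IfNotPresent # use the image already present on the node
        ports:
        - containerPort: 3000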

Step 3: Expose Pods via a Service

Pods are ephemeral and sit behind dynamically assigned IPs. To reach your app reliably, from inside or outside the cluster, you create a Service.

For a local test with minikube or another single-node cluster, use type NodePort. On a cloud provider, use LoadBalancer.

Create service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: hello-node-service
spec:
  selector:
    app: hello-node # Select pods labeled with this app label
  ports:
    - protocol: TCP
      port: 3000       # Service port inside cluster 
      targetPort: 3000 # Pod's exposed port 
      nodePort: 32000  # Opens port on node machine (for NodePort service)
  type: NodePort     # Use LoadBalancer if your cloud supports it 

Step 4: Deploy To Kubernetes

Apply the manifests with kubectl:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
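
If both manifests live in the same directory, you can also apply everything in one go:

kubectl apply -f .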

Check deployment status:

kubectl get deployments
kubectl get pods
kubectl get svc hello-node-service

If everything worked, you'll see three pods in the Running state and the service exposing node port 32000.
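
To watch the rollout finish, or to debug pods that never become ready, a few standard commands help:

kubectl rollout status deployment/hello-node-deployment
kubectl describe pods -l app=hello-node   # events, image pull errors, scheduling issues
kubectl logs -l app=hello-node            # application output from the pods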


Step 5 (Local Test): Access Your App

For minikube, run:

minikube service hello-node-service --url

It will print something like:

http://192.168.99.100:32000/

Open that URL in your browser—or run curl:

curl http://192.168.99.100:32000/
# Output should be "Hello from container!"

For other clusters, expose the Service with the appropriate type, or port-forward temporarily:

kubectl port-forward svc/hello-node-service 3000:3000

curl http://localhost:3000/
# Output again should be "Hello from container!"

Optional Tweaks & Tips

  • Rolling Updates: Just change your Deployment’s image tag/version and run kubectl apply -f deployment.yaml again for zero-downtime upgrades.
  • Resource Limits: Production-grade workloads need CPU/memory requests and limits defined in the container spec (see the sketch after this list).
  • Probe Health Checks: Add a readinessProbe and livenessProbe for robustness (also sketched below).
  • Namespaces: Organize your workloads into namespaces instead of putting everything in default.
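
As a rough sketch of the resource and probe tips above, the container section of deployment.yaml could grow into something like this; treat the numbers as placeholders to tune for your own app:

      containers:
      - name: hello-node-container
        image: <dockerhub-username>/<your-image-name>:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: 100m      # scheduling guarantee
            memory: 64Mi
          limits:
            cpu: 250m      # hard ceiling
            memory: 128Mi
        readinessProbe:    # only send traffic once the app responds
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 3
          periodSeconds: 10
        livenessProbe:     # restart the container if it stops responding
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 15

And for the rolling-update tip, kubectl set image deployment/hello-node-deployment hello-node-container=<dockerhub-username>/<your-image-name>:v2 is a one-liner alternative to editing the YAML by hand.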

Summary

You now have full control over deploying any Docker container into Kubernetes without relying on heavy abstraction tools or guesswork.

To recap:

  1. Build & push your image.
  2. Write Deployment YAML targeting that image.
  3. Expose via Service YAML.
  4. Apply both manifests (kubectl apply) onto your cluster.
  5. Access and validate your live app endpoint.

Mastering these steps bridges the gap between local development and production-grade orchestration: essential knowledge for any cloud-native developer today.

Happy clustering! 🚀


If you want me to cover more advanced scenarios like Helm charts or CI/CD pipelines next time, just let me know!