Beginner's Guide to Kubernetes

Reading time: 1 min
#Cloud #DevOps #Containers #Kubernetes #Minikube #Deployment

Kubernetes Fundamentals: First Deployment, Real-World Workflow

No theory wall here: this is the core process for standing up workloads in Kubernetes—by example, using NGINX. The pipeline below takes you from local cluster bootstrapping to service exposure, matching standard patterns used in production, albeit on a smaller scale.


Prerequisites

  • Kubernetes cluster (v1.26+ recommended): For local development, Minikube ≥1.30 or KIND are sufficient. Cloud-managed clusters (GKE, EKS, AKS) are also valid—just mind permissions and context.
  • kubectl, version-matched to the cluster (within one minor version of the API server). Check client and server versions with kubectl version (a quick check is shown after this list).
  • Terminal access and basic comfort with container images. Prior Docker experience helps but isn’t mandatory.
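
A minimal sanity check before starting, assuming a recent kubectl (the --short flag used in older guides has been removed, so the plain form is the portable one):

kubectl version
kubectl config current-context

The first command prints client and server versions; the second confirms which cluster kubectl currently points at.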

Quick Primer: Core Kubernetes Objects

Most real-world Kubernetes tasks require only three building blocks:

Resource      Purpose
Pod           One or more tightly coupled containers sharing storage and network.
Deployment    Manages stateless ReplicaSets, rolling updates, and scaling.
Service       Stable networking for a set of pods; can expose workloads inside or outside the cluster.

For now, ignore ConfigMaps, StatefulSets, and Ingress—they emerge in more advanced use cases.


Example: Deploying NGINX as a Stateless Workload

First, verify your cluster is accessible:

kubectl cluster-info

If the command fails, point kubectl at the correct cluster context (kubectl config use-context <context-name>); listing available contexts is shown below.
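
To see which contexts exist before switching (the minikube name here is simply the default context Minikube creates):

kubectl config get-contexts
kubectl config use-context minikube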


Step 1: Write a Deployment Manifest

Create nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25.3
          ports:
            - containerPort: 80

Note: Specifying nginx:1.25.3 avoids “latest” tag drift. Production clusters pin images for traceability.
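
Once the Deployment exists (Step 2 below), a pinned tag can be changed later either by editing the manifest and re-applying, or imperatively; the newer tag here is purely illustrative:

kubectl set image deployment/nginx-deployment nginx=nginx:1.25.4
kubectl rollout status deployment/nginx-deployment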


Step 2: Apply the Deployment

kubectl apply -f nginx-deployment.yaml

Typical output:

deployment.apps/nginx-deployment created

Verify pod status:

kubectl get pods -l app=nginx -o wide

Pods stuck in ContainerCreating or ImagePullBackOff for longer than ~30 seconds often indicate container runtime or image pull errors. Check with:

kubectl describe pod <pod-name>

or

kubectl logs <pod-name>

Common error:

Failed to pull image "nginx:1.25.3": rpc error: code = Unknown desc = Error response from daemon: pull access denied

If you see this, verify the image name and tag, check registry credentials, or troubleshoot local registry mirrors.
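
Cluster events often pinpoint the failure faster than per-pod inspection; sorting by timestamp keeps the most recent events at the bottom:

kubectl get events --sort-by=.lastTimestamp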


Step 3: Expose the Deployment

Pods aren't reachable from outside the cluster by default; a Service is required.

kubectl expose deployment/nginx-deployment --type=NodePort --port=80

Check assigned NodePort:

kubectl get service nginx-deployment

Sample output:

NAME               TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-deployment   NodePort   10.96.17.113   <none>        80:31823/TCP   23s

In this case, 31823 is the port to use below.
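
The imperative kubectl expose call above is convenient, but the same Service can live as a manifest next to the Deployment. A minimal sketch mirroring that command (the nodePort is left for the cluster to assign):

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80

Applying it with kubectl apply -f keeps exposure under version control alongside the Deployment.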


Access Patterns

Local Minikube:

minikube service nginx-deployment --url

This command forwards the NodePort to your host, e.g.,

http://192.168.49.2:31823

Your browser should display the default NGINX welcome page.

Cloud-Hosted Cluster:
Find a worker node’s external IP, then access:

http://<node-external-ip>:<NodePort>

Security controls (firewalls, security groups) must allow inbound traffic on that port.
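
If node IPs aren't reachable (common behind restrictive firewalls or security groups), kubectl port-forward works on any cluster without opening a NodePort; the local port 8080 here is arbitrary:

kubectl port-forward service/nginx-deployment 8080:80

While the command runs, the app is available at http://localhost:8080.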


Step 4: Scaling the Deployment

Increase pod count to four:

kubectl scale deployment/nginx-deployment --replicas=4

Confirm scaling:

kubectl get pods -l app=nginx

Pods are added or removed to match the desired replica count. Unexpected restarts (a high RESTARTS count) often reveal issues with liveness probes or image errors.
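
Imperative scaling is handy for experiments, but the declarative route keeps the manifest as the source of truth: change replicas: 2 to replicas: 4 in nginx-deployment.yaml and re-apply:

kubectl apply -f nginx-deployment.yaml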


Practical Notes and Gotchas

  • NodePort caveat: NodePort services are rarely used in production because the port range is hard to manage and there is no real load balancing. For public-facing services, prefer type: LoadBalancer (on cloud providers) or set up an Ingress controller; a LoadBalancer variant is sketched after this list.
  • Resource cleanup: Remove all created resources with:
    kubectl delete service nginx-deployment
    kubectl delete deployment nginx-deployment
    
  • Manifest versioning: Tracking changes to deployment YAMLs in Git is standard practice; combine with CI/CD for automated rollouts.
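
As referenced in the NodePort caveat, a minimal LoadBalancer variant (the nginx-lb name is just an example; cloud providers provision an external load balancer, while Minikube needs minikube tunnel for the service to get an external IP):

kubectl expose deployment/nginx-deployment --type=LoadBalancer --port=80 --name=nginx-lb
kubectl get service nginx-lb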

Key Takeaways

  • Declarative manifests (YAML) codify contracts between developer intent and cluster state.
  • Deployments abstract away replica and rollout management—critical for reliable upgrades.
  • Services enable network exposure, but the type must be chosen appropriately for the environment (NodePort, LoadBalancer, Ingress).
  • Diagnostics routinely rely on kubectl logs, kubectl describe, and tight iteration loops.

Next Steps

Consider experimenting with:

  • Helm charts: For parameterizing and templating deployments.
  • PersistentVolumeClaims: For stateful workloads.
  • ConfigMap/Secret: For decoupling configuration.
  • Custom Images: Build and push your own workload, updating image references in manifests.

Not perfect, but the workflow above matches real-world deployment cycles for stateless apps. For deeper troubleshooting (e.g., network policies, liveness/readiness probes), observability tools like kubectl top or Prometheus/Grafana integrate well with this pattern.
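
kubectl top requires the metrics-server add-on (on Minikube: minikube addons enable metrics-server); a quick resource check for the pods deployed above:

kubectl top pods -l app=nginx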


Feedback on operational edge cases is encouraged—real failures build better mental models than “it just worked.”


#kubernetes #infra #containers #devops