Deploy Docker To Google Cloud

#Cloud #DevOps #Containers #Docker #GoogleCloud #Kubernetes

Mastering Docker Deployment on Google Cloud: A Step-by-Step Guide to Scalable Containerization

Efficiently deploying Docker containers to Google Cloud unlocks seamless scalability and robust infrastructure management, essential for modern cloud-native applications and rapid development cycles.

Most guides stop at the basics. This one digs into optimizing deployment pipelines on Google Cloud, flagging hidden pitfalls and advanced configurations so you can use the platform effectively rather than superficially.


Introduction

If you’re a developer or DevOps engineer looking to elevate your container deployment game, deploying Docker containers on Google Cloud Platform (GCP) is an indispensable skill. While Docker streamlines app packaging and dependencies, coupling it with GCP’s scalable infrastructure makes your applications resilient, scalable, and ready for the real world.

This step-by-step guide will take you beyond a simple “docker run” command to show you how to deploy containers effectively on Google Cloud using Google Kubernetes Engine (GKE) and Cloud Run — two of the most powerful container orchestration and deployment services offered by GCP.


Why Deploy Docker Containers on Google Cloud?

  • Scalability: Automatically scale up/down based on demand.
  • Managed Infrastructure: No need to worry about underlying server maintenance.
  • Integration: Seamless connection with other GCP services like Cloud SQL, Pub/Sub, Stackdriver.
  • Efficiency: Reduced deployment time with continuous delivery pipelines.

Prerequisites

Before we dive in:

  • A Google Cloud account with billing enabled.
  • gcloud CLI installed on your machine.
  • Docker installed locally.
  • Basic knowledge of Docker commands.
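
With the prerequisites in place, the initial gcloud setup might look like this (the project ID is a placeholder; substitute your own):

```shell
# Log in and point the CLI at your project (replace my-project-id).
gcloud auth login
gcloud config set project my-project-id

# Enable the APIs used later in this guide.
gcloud services enable container.googleapis.com run.googleapis.com
```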

1. Preparing Your Docker Application

Let’s start with a simple Node.js app as an example.

app.js:

const express = require('express');
const app = express();
const port = process.env.PORT || 8080;

app.get('/', (req, res) => {
  res.send('Hello from Docker on Google Cloud!');
});

app.listen(port, () => {
  console.log(`App listening at http://localhost:${port}`);
});
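
The Dockerfile below copies package*.json, so the app also needs a package.json. A minimal sketch (version numbers are illustrative, not pinned recommendations):

```json
{
  "name": "sample-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
```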

Dockerfile:

FROM node:18-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install --omit=dev

COPY . .

EXPOSE 8080

CMD ["node", "app.js"]

Build your Docker image locally:

docker build -t gcr.io/[PROJECT-ID]/sample-app:v1 .

Replace [PROJECT-ID] with your actual GCP project ID.
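
Before pushing, it's worth confirming the image runs locally (assuming Docker is running and port 8080 is free):

```shell
# Run the container, mapping host port 8080 to the container's port.
docker run --rm -p 8080:8080 gcr.io/[PROJECT-ID]/sample-app:v1

# In another terminal:
curl http://localhost:8080
# Expected response: Hello from Docker on Google Cloud!
```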


2. Push Your Image to Google Container Registry (GCR)

Note: Google has deprecated Container Registry in favor of Artifact Registry, and gcr.io paths are now served by Artifact Registry. The commands below still apply, but new projects should consider creating Artifact Registry repositories directly.

Authenticate gcloud to access GCR:

gcloud auth configure-docker

Push your image:

docker push gcr.io/[PROJECT-ID]/sample-app:v1

You can verify the upload in the Container Registry section of the Google Cloud Console.
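
You can also verify from the command line:

```shell
# List the tags pushed for this image.
gcloud container images list-tags gcr.io/[PROJECT-ID]/sample-app
```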


3. Deploying on Google Kubernetes Engine (GKE)

Set up a Kubernetes Cluster

Create a cluster whose node count auto-adjusts between 1 and 5 for scalability:

gcloud container clusters create sample-cluster \
  --zone us-central1-a \
  --num-nodes=1 \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --machine-type=e2-medium

Get authentication credentials for kubectl:

gcloud container clusters get-credentials sample-cluster --zone us-central1-a

Deploy Your Container as a Kubernetes Deployment

Create a deployment YAML (deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app-container
        image: gcr.io/[PROJECT-ID]/sample-app:v1
        ports:
        - containerPort: 8080

Apply it:

kubectl apply -f deployment.yaml
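
The deployment above sets no resource requests or limits. In practice you would usually add them under each container entry in deployment.yaml; the values below are illustrative starting points, not tuned recommendations:

```yaml
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
```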

Expose your deployment via a LoadBalancer service (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: sample-app-service
spec:
  type: LoadBalancer
  selector:
    app: sample-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Apply it:

kubectl apply -f service.yaml

Get the External IP Address

Run:

kubectl get svc sample-app-service --watch

Once the EXTERNAL-IP is populated, open it in your browser — you should see:

Hello from Docker on Google Cloud!

Pro Tip: Use Horizontal Pod Autoscaler (HPA)

Enable autoscaling for your pods based on CPU usage:

kubectl autoscale deployment sample-app-deployment --min=2 --max=10 --cpu-percent=50 

This scales pods based on real-time CPU metrics. Note that the HPA computes utilization from each container's CPU request, so make sure resource requests are set on the deployment.
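
The same autoscaler can also be expressed declaratively with the autoscaling/v2 API (a sketch matching the deployment name above):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```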


4. Deploying with Cloud Run for Serverless Containers

If managing Kubernetes sounds complex or overkill for your application, Cloud Run offers serverless container hosting with autoscaling out of the box.

Deploy Image Directly From Container Registry

Run this command:

gcloud run deploy sample-cloudrun-service \
    --image gcr.io/[PROJECT-ID]/sample-app:v1 \
    --platform managed \
    --region us-central1 \
    --allow-unauthenticated 

The --allow-unauthenticated flag makes the service publicly reachable; omit it if you want to require authentication, in which case gcloud will prompt you during deployment.

Cloud Run hides all infrastructure details and automatically scales from zero to many instances depending on demand. Perfect for APIs or microservices!
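
After deployment, you can retrieve the service URL from the CLI:

```shell
# Print just the public URL of the deployed service.
gcloud run services describe sample-cloudrun-service \
    --platform managed --region us-central1 \
    --format 'value(status.url)'
```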


Hidden Pitfalls & Tips

  • Beware of Cold Starts in Serverless:
    Cloud Run scales down to zero, so the first request after idleness can be slower. If low latency matters, set a minimum instance count with the --min-instances flag.

  • Resource Limits Matter:
    Set CPU and memory requests/limits properly in GKE deployments (resources: field). Under-provisioning leads to throttling; over-provisioning wastes money.

  • Use Service Accounts Wisely:
    Grant least privilege permissions for security in both GKE nodes and Cloud Run services.

  • Automate with CI/CD Pipelines:
    Use Cloud Build or GitHub Actions integrated with your registry and GKE/Cloud Run to deploy automatically on each push.
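
As one sketch of the CI/CD idea, a minimal cloudbuild.yaml that builds, pushes, and rolls the image out to Cloud Run (names match the examples in this guide; adjust to your project):

```yaml
steps:
# Build the image, tagged with the commit SHA.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/sample-app:$COMMIT_SHA', '.']
# Push it to the registry.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/sample-app:$COMMIT_SHA']
# Deploy the new image to Cloud Run.
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'sample-cloudrun-service',
         '--image', 'gcr.io/$PROJECT_ID/sample-app:$COMMIT_SHA',
         '--region', 'us-central1', '--platform', 'managed']
images:
- 'gcr.io/$PROJECT_ID/sample-app:$COMMIT_SHA'
```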


Wrapping Up

Deploying Docker containers on Google Cloud doesn’t have to be intimidating. Whether you opt for the flexibility of Kubernetes or simplicity of serverless containers with Cloud Run, both approaches bring robust scalability suited to modern applications.

This guide walked through building your first image, pushing it to Container Registry, and then reliably deploying it while optimizing resource usage and deployment workflows.

With these core skills mastered, you’re ready to deepen cloud-native expertise—building fast, scalable application platforms without reinventing infrastructure wheels every time you deploy.

Happy Dockering! 🚢☁️


Feel free to leave questions or share your own tips below in the comments!