Veeam Backup to GCP

#Cloud#Backup#DataProtection#Veeam#GCP#GoogleCloud

Veeam Backups to Google Cloud Platform: A Technical Deep Dive

When offsite backup becomes non-negotiable, the Veeam and GCP integration solves it at scale. Local SAN/NAS is fast—but not invincible. Ransomware, hardware failure, regional outages: seen it all. Pushing backup data to Google Cloud Storage (GCS) as a capacity tier in a Veeam Scale-out Backup Repository (SOBR) brings both resiliency and agility. Let’s get into the specifics, bypassing vendor fluff.


Business Case and Risks

Data retention policies, compliance, and recovery point objectives (RPOs) typically drive this architecture. For enterprises already running Veeam Backup & Replication (v11 or later), layering GCP storage cuts operational risk and limits capital expense. No more babysitting tape rotations. However, cloud egress fees and API throttles do lurk in the background—always factor those into your restoration SLAs.


Prerequisites (Don’t Skip)

  • Veeam Backup & Replication 11.0.0.837 or later (earlier versions lack native GCS integration).
  • GCP account with billing enabled; Owner or specific Storage Admin permissions required.
  • Familiarity with GCP IAM, bucket classes, and Veeam’s SOBR configuration.
  • Service account credentials (JSON)—not user credentials. Avoid legacy access keys.
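Before touching Veeam, a quick sanity check on the GCP side helps (a minimal sketch assuming the gcloud CLI is installed; MY_PROJECT is a placeholder project ID):

# Confirm you are authenticated and pointed at the right project
gcloud auth list
gcloud config set project MY_PROJECT

# Make sure the Cloud Storage API is enabled
gcloud services enable storage.googleapis.com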

1. Provision a GCS Bucket

Critical first step: choose bucket naming and redundancy to match your DR plan.

GCP Console: Storage > Browser > Create bucket
  • Bucket name: must be globally unique, e.g., org-veeam-prod-backup.
  • Location type:
    • Regional (low latency, fast recovery, higher cost).
    • Multi-region (recommended for disaster recovery).
  • Storage class:
    Use Case    Class                       Notes
    Daily ops   Standard                    For recent, hot data
    Archive     Nearline/Coldline/Archive   Lower cost; retrieval fees apply

Note: In GCS, Coldline/Archive reads remain low-latency, but per-GB retrieval fees and minimum storage durations (90 and 365 days) make frequent restores expensive. Run trial restores and cost estimates before adopting.
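For CLI-first provisioning, here is the equivalent gcloud sketch, assuming the example bucket name above and the US multi-region:

# Multi-region US bucket, Standard class, uniform bucket-level access
gcloud storage buckets create gs://org-veeam-prod-backup \
  --location=us \
  --default-storage-class=STANDARD \
  --uniform-bucket-level-access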


2. Service Account Creation & Permissions

Veeam authenticates to GCP via service accounts. For backup and restore to function, the following IAM role is required at minimum:

  • roles/storage.objectAdmin on the backup bucket.

Steps:

  1. IAM & Admin > Service Accounts > Create Service Account
  2. Name: veeam-backup-svc (avoid over-permissive global roles)
  3. Grant only storage permissions scoped to target bucket (use custom roles if possible).
  4. Generate JSON key—download and secure immediately.

Gotcha: Key loss requires creating a new key and updating all Veeam repository configurations.
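The same flow via gcloud (MY_PROJECT is a placeholder project ID; the bucket name matches the earlier example):

# 1. Create the dedicated service account
gcloud iam service-accounts create veeam-backup-svc \
  --display-name="Veeam backup service account"

# 2. Grant storage.objectAdmin scoped to the backup bucket only
gcloud storage buckets add-iam-policy-binding gs://org-veeam-prod-backup \
  --member="serviceAccount:veeam-backup-svc@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"

# 3. Generate the JSON key Veeam will consume; store it securely
gcloud iam service-accounts keys create veeam-backup-svc.json \
  --iam-account="veeam-backup-svc@MY_PROJECT.iam.gserviceaccount.com"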


3. Integrate GCP with Veeam SOBR

Veeam Console

  • Backup Infrastructure > Backup Repositories > Add Repository > Object Storage > Google Cloud Storage (on v11 and later, use the native Google Cloud Storage option, not S3 Compatible)
  • Fill in:
    • Bucket name and (optionally) a folder prefix for granular separation.
    • Upload service account JSON at credentials prompt.
  • Connectivity test should yield:
    Connection to object storage established successfully.
    

If you see “The remote server returned an error: (403) Forbidden”, double-check the IAM role and scope against the bucket.
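A quick way to test the key outside Veeam and isolate IAM problems (file names match the earlier example):

# Activate the service account locally, then exercise read and write
gcloud auth activate-service-account --key-file=veeam-backup-svc.json
gcloud storage ls gs://org-veeam-prod-backup
date > /tmp/veeam-connectivity-test.txt
gcloud storage cp /tmp/veeam-connectivity-test.txt gs://org-veeam-prod-backup/
gcloud storage rm gs://org-veeam-prod-backup/veeam-connectivity-test.txt
# Switch back to your own account afterwards: gcloud config set account YOUR_ACCOUNT

If the listing works but the copy fails, the role grants read but not write; objectAdmin covers both.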


4. Assemble a Scale-out Backup Repository

A SOBR presents multiple heterogeneous storage backends as a single logical repository.

  • Performance Tier: On-prem storage (e.g., local DAS, NAS, or SAN). Keeps latest backups for fast restore.
  • Capacity Tier: Newly linked GCP Object Storage bucket.

Configuration in Veeam:

  • Under SOBR, click “Add Extent” > add existing local repo as Performance Tier
  • Enable “Extend repository with object storage” and select configured GCS repo.
  • Offload policy: “Move backups to object storage as soon as possible”—suitable for cold-site DR, or set age threshold (e.g., 7 days) if WAN is a bottleneck.

Known issue: First offload can saturate the outbound link. For large datasets, consider seeding initial backups locally (Veeam’s copy job) before enabling auto-move.


5. Backup Job Targeting SOBR

Backup workflow must explicitly target the SOBR, not an individual repository.

  • Job scope: VMs, physical agents, or application workloads.
  • Schedule: Match RPO—incrementals daily, synthetic fulls weekly.
  • Advanced: Enable inline deduplication and compression (Optimal/LZ4 or higher) in the job's storage settings to cut storage and traffic.

Example:

Backup Job: AppVMS-prod
Source: vSphere Cluster
Target: SOBR-GCP
Retention: 30 days (7 on-prem, 23 in GCS)

Jobs run as normal; offload to GCS happens in the background per policy, and older restore points are moved into the capacity tier.


Example: VMware VM to GCS (Log Excerpt)

A typical backup to GCP offload produces logs like:

Moving backup file "VM01D20240601.vbk" to capacity tier...
[01.06.2024 02:13:05] Upload completed: 12 GB transferred at 190 Mbps
Offload job completed successfully

Errors to watch for:

  • 429 Too Many Requests: you are hitting GCS API quotas; throttle Veeam task concurrency or request a quota increase from GCP.
  • Expensive restores: Coldline/Archive classes carry per-GB retrieval fees and minimum storage durations, so budget mass restores carefully.
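To spot-check what actually landed in the bucket and its storage class (Veeam manages its own object layout inside the bucket):

# List offloaded objects with metadata; filter to names and storage class
gsutil ls -L -r gs://org-veeam-prod-backup | grep -E '^gs://|Storage class'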

Optimization—What the Docs Skip

  • Lifecycle rules: Use GCP bucket lifecycle management to auto-delete objects past compliance retention. Cuts storage spend independently of Veeam's retention logic (see the sketch after this list), but set the age threshold beyond Veeam's own retention so the two never conflict.
  • Bandwidth shaping: In Veeam, set offload window (e.g., nights/weekends) to avoid production impact.
  • Object Lock (Immutability): With Veeam v12+, GCP buckets configured with Object Lock enforce retention—ransomware resilience.
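A minimal sketch of such a lifecycle rule, assuming a 90-day compliance window; keep the age safely beyond Veeam's retention so the rule never deletes restore points Veeam still tracks:

# Write a delete-after-90-days rule (adjust the age to your compliance window)
cat > lifecycle.json <<'EOF'
{
  "rule": [
    { "action": { "type": "Delete" }, "condition": { "age": 90 } }
  ]
}
EOF

# Apply it to the backup bucket
gcloud storage buckets update gs://org-veeam-prod-backup \
  --lifecycle-file=lifecycle.json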

Alternatives:

  • “S3 Compatible” integration for hybrid scenarios targeting MinIO or Wasabi alongside GCP, but this disables some GCS-native features.
  • On Veeam versions before v11, offload to GCS required manual gsutil scripting, which was error-prone and unsupported (a rough sketch follows below).
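For reference, that legacy approach looked roughly like this (unsupported; the local path is illustrative):

# Pre-v11 DIY offload: mirror the local repository to GCS by hand
gsutil -m rsync -r /backups/veeam gs://org-veeam-prod-backup/legacy-offload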

Side Notes

  • Cloud restore speed is directly tied to WAN bandwidth and GCP bucket class.
  • Veeam’s Capacity Tier does not support instant VM recovery from object storage (as of v12). Local Performance Tier is the go-to for fast RTO.
  • Monitoring: Use Veeam ONE or GCP metrics dashboard to track egress/cost anomalies.

Not Perfect — but Robust

No approach is flawless. GCP storage egress fees can disrupt budget projections if mass restores are needed. Bucket-level immutability demands a proper retention and versioning setup, and mixing multiple cloud targets adds management overhead. For most workloads, rapid disaster recovery and tape elimination outweigh these drawbacks.


Questions on odd errors, bucket design, or multi-cloud SOBR interplay? Drop them below or hit up the docs—field experience varies by environment.