Upload To Google Cloud Storage

Reading time: 1 min
#Cloud #Storage #Upload #GoogleCloudStorage #GCS #ResumableUploads

Uploading to Google Cloud Storage: Strategies for Scale, Integrity, and Performance

Moving data into Google Cloud Storage (GCS) is often the bottleneck lurking behind slow analytics pipelines, missed SLAs, or unexpected cloud costs. Whether loading daily event logs or pushing multi-gigabyte datasets, the naïve drag-and-drop approach simply does not scale. Instead, leverage the GCS APIs, optimize transfer strategies, and architect according to your data profiles.

Below are direct techniques and code samples for practical, resilient uploads, drawn from projects handling terabytes to petabytes per month. The examples use Python, but the principles generalize.


Choosing the Right Upload Path

At a glance:

Method               | Use Case            | Limitations
Console upload       | Ad hoc / manual     | Not automatable, poor for scale
gsutil cp            | Automatable scripts | Can't parallelize per chunk/file
Python client + APIs | Custom, scalable    | Requires workflow design
  • For automation, skip the web console. Use gsutil for simple, scriptable uploads at the gigabyte scale, but note that it chokes on hundreds of small files or single files larger than roughly 50 GB.
  • Library-based or API-driven approaches (Python, Node.js, Java, etc.) permit precise control and error handling; a minimal baseline example follows.
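
For reference, the simplest library path is a one-call upload. A minimal sketch, assuming application-default credentials and an existing bucket (names are illustrative):

from google.cloud import storage

def upload_small_file(bucket_name, source_file_path, dest_blob_name):
    # Picks up application-default credentials (gcloud auth or a service account)
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(dest_blob_name)
    # For small files this is a single request; no chunking or resumption needed
    blob.upload_from_filename(source_file_path)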

Resumable Uploads: Preventing Wasted Rework

Single large files are notorious for stalling or failing mid-transfer. Worse, a non-resumable upload that dies partway through surfaces only a generic error, such as
google.api_core.exceptions.ServiceUnavailable: 503 POST ...

A resumable upload avoids replaying the entire transfer after a network blip. In the Python client, setting a chunk size on the blob forces a chunked, resumable upload:

from google.cloud import storage

def upload_large_file(bucket_name, source_file_path, dest_blob_name):
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(dest_blob_name)
    # Chunk size must be a multiple of 256 KiB; larger chunks mean fewer round trips on fast links
    blob.chunk_size = 8 * 1024 * 1024  # 8 MiB per chunk; test with your infra
    blob.upload_from_filename(source_file_path)
    print(f"{source_file_path} uploaded as {dest_blob_name}")

Optimization notes:

  • On unreliable links (hotel Wi-Fi, shared lines), favor a smaller chunk_size.
  • With very large uploads (10 GB+), monitor for ConnectionError or DeadlineExceeded, then retry the session (the Python client auto-retries up to a point).
  • GCS caps a single object at 5 TiB. For files of 1 TB and up, chunk-and-compose (below) keeps failure domains manageable; pair it with checksum validation, as sketched after this list.
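
For the integrity side, the client can also send a checksum with the upload so corrupted transfers fail loudly instead of landing silently. A minimal sketch, assuming a recent google-cloud-storage release that accepts the checksum argument (function name is illustrative):

from google.cloud import storage

def upload_with_checksum(bucket_name, source_file_path, dest_blob_name):
    blob = storage.Client().bucket(bucket_name).blob(dest_blob_name)
    blob.chunk_size = 8 * 1024 * 1024  # resumable, 8 MiB chunks
    # A checksum mismatch makes the upload fail rather than storing corrupt data
    blob.upload_from_filename(source_file_path, checksum="crc32c")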

Push Small Files at Scale: Parallelism with Threads or Processes

Serial uploads of 30,000 photo thumbnails? Unacceptable—what takes hours can finish in minutes with concurrency.

Don’t use threads for a single massive file; use them for many small ones. Python’s concurrent.futures is the fastest to integrate (multiprocessing gives better CPU scaling for GIL-bound work, but GCS uploads are I/O-bound):

import concurrent.futures, os
from google.cloud import storage

def upload_file(bucket, src, dst):
    blob = bucket.blob(dst)
    blob.upload_from_filename(src)
    print(f"{src} -> {dst}")

def parallel_uploads(bucket_name, local_folder, remote_prefix):
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    files = [f for f in os.listdir(local_folder)
             if not f.startswith('.') and os.path.isfile(os.path.join(local_folder, f))]
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as ex:
        # list() drains the lazy map iterator so upload exceptions surface here
        list(ex.map(lambda fn: upload_file(bucket, os.path.join(local_folder, fn),
                                           f"{remote_prefix}/{fn}"), files))

# Example usage
parallel_uploads("my-bucket", "/mnt/batch_out", "archive/2024-06-05")

Known issues:

  • The client opens a separate connection per thread, so open file limits (ulimit -n) can be exceeded on large servers; adjust your worker count or OS settings accordingly.
  • Some networks throttle parallel streams aggressively (especially corporate VPNs); always benchmark before tuning workers up to the maximum. See the alternative sketch below.
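
As an alternative to hand-rolled pools, recent 2.x releases of google-cloud-storage ship a transfer_manager helper that manages the workers for you. A rough sketch; argument names can vary slightly between releases, so treat this as a starting point rather than a drop-in:

from google.cloud import storage
from google.cloud.storage import transfer_manager

def parallel_uploads_tm(bucket_name, local_folder, remote_prefix, filenames):
    bucket = storage.Client().bucket(bucket_name)
    # Uploads local_folder/<name> to <remote_prefix>/<name> with a managed worker pool
    results = transfer_manager.upload_many_from_filenames(
        bucket,
        filenames,
        source_directory=local_folder,
        blob_name_prefix=f"{remote_prefix}/",
        max_workers=8,
    )
    # Each entry is None on success or the exception raised for that file
    for name, result in zip(filenames, results):
        if isinstance(result, Exception):
            print(f"FAILED {name}: {result}")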

Composite Objects: Assembling Large Files from Chunks

Uploading a 500GB tarball? To minimize failure domains and enable parallel transfer, split into smaller temporary blobs, upload those, then merge via GCS’s “compose” operation.

from google.cloud import storage

def compose_object(bucket_name, chunk_names, final_blob_name):
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    source_blobs = [bucket.blob(name) for name in chunk_names]
    destination_blob = bucket.blob(final_blob_name)
    # Constraints: up to 32 components per compose; multi-pass works for hundreds of parts
    destination_blob.compose(source_blobs)
    print(f"Composed {final_blob_name} from {len(chunk_names)} chunks")

Practical tips:

  • Use a local tool like split (split --bytes=100M bigfile.dat part_) for chunking.
  • Compose in passes (a binary-tree style merge) if you have more than 32 parts; see the sketch after this list.
  • Delete the chunks after composing to save cost, or you’ll keep paying for n+1 objects.
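
A rough sketch of a multi-pass compose with cleanup, assuming the chunk objects are already uploaded and their sorted names reflect the correct order (helper name is illustrative):

from google.cloud import storage

def compose_many(bucket_name, chunk_names, final_blob_name):
    bucket = storage.Client().bucket(bucket_name)
    names = list(chunk_names)
    pass_num = 0
    while len(names) > 1:
        merged = []
        for i in range(0, len(names), 32):
            group = names[i:i + 32]
            # The last pass writes the final object; earlier passes write temporaries
            target = (final_blob_name if len(names) <= 32
                      else f"{final_blob_name}.tmp{pass_num}.{i // 32}")
            bucket.blob(target).compose([bucket.blob(n) for n in group])
            merged.append(target)
        # Delete this pass's inputs so you don't keep paying for intermediate objects
        for n in names:
            bucket.blob(n).delete()
        names = merged
        pass_num += 1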

Compress Before You Send (Sometimes)

For plaintext logs or CSV, compressing before upload can cut network usage by 80–90%. For already-compressed formats (JPG, MP4, Parquet) this is wasted CPU.

import gzip
import os
import shutil

from google.cloud import storage

def compress_and_upload(bucket_name, file_path, dest_blob_name):
    tmp_path = f"{file_path}.gz"
    with open(file_path, 'rb') as inf, gzip.open(tmp_path, 'wb') as outf:
        shutil.copyfileobj(inf, outf)
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(dest_blob_name)
    blob.upload_from_filename(tmp_path)
    os.remove(tmp_path)

Gotchas:

  • GCS does not automatically decompress the object on download; consumers must gunzip or handle Content-Encoding themselves.
  • Consider storing with a .gz extension and setting the content type appropriately (a one-request variant is sketched below):
    blob.content_type = "application/gzip"
    blob.patch()
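
A one-request variant of the snippet above: pass the content type at upload time so no separate patch() round trip is needed. A minimal sketch, assuming the .gz file already exists locally:

from google.cloud import storage

def upload_gzip(bucket_name, gz_path, dest_blob_name):
    blob = storage.Client().bucket(bucket_name).blob(dest_blob_name)
    # Content type is set in the same request that uploads the bytes
    blob.upload_from_filename(gz_path, content_type="application/gzip")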
    

Security and Access Control

Never embed static credentials in code. Instead:

  • Deploy via service account with least-privilege IAM.
  • For third-party or browser-side uploads, hand out signed URLs with a fine-grained expiry (a fuller sketch follows this list):
    blob.generate_signed_url(expiration=timedelta(minutes=15), method="PUT")
    
  • Log all 401 and 403 failures; permission drift is frequently the real culprit behind “upload failed” bugs.
  • Track quota: 429 Too Many Requests signals per-project API limits, not network errors.
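
A fuller signed-URL sketch, assuming the credentials backing the client can sign (for example, a service-account key); bucket and object names are illustrative:

from datetime import timedelta

from google.cloud import storage

def make_upload_url(bucket_name, blob_name):
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)
    # V4 URL valid for 15 minutes; the holder may PUT exactly this object, nothing else
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),
        method="PUT",
        content_type="application/octet-stream",
    )

The caller then uploads with a plain HTTP PUT and the matching Content-Type header; no Google credentials ever travel to the browser or third party.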

Error Handling and Retries

Network outages, API quota exhaustion, and preemption events are routine. Handle both retryable and permanent errors.

In production:

Error Message (sample)                     | Next Step
503 ServiceUnavailable                     | Retry with backoff
403 Forbidden                              | Validate service account / IAM
ConnectionResetError: [Errno 104]          | Retry; check bulk rate limits / firewall
google.api_core.exceptions.TooManyRequests | Throttle, monitor quota, page ops

The GCS Python client auto-retries transient, idempotent errors. For custom logic:

  • Use exponential backoff with randomized delays.
  • Cap maximum retries and log persistent failures; a minimal sketch follows.
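
A minimal backoff wrapper, assuming re-uploading the same object is safe to repeat and that the delay numbers are tuned for your environment (function name is illustrative):

import logging
import random
import time

from google.api_core import exceptions
from google.cloud import storage

RETRYABLE = (exceptions.ServiceUnavailable, exceptions.TooManyRequests,
             exceptions.DeadlineExceeded, ConnectionError)

def upload_with_backoff(bucket_name, src, dst, max_attempts=5):
    blob = storage.Client().bucket(bucket_name).blob(dst)
    for attempt in range(1, max_attempts + 1):
        try:
            blob.upload_from_filename(src)
            return
        except RETRYABLE as exc:
            if attempt == max_attempts:
                logging.error("Giving up on %s after %d attempts: %s", src, attempt, exc)
                raise
            # Exponential backoff with jitter: 1s, 2s, 4s, ... plus up to 1s of noise
            time.sleep(2 ** (attempt - 1) + random.random())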

Post-Upload: Metadata and Batch Operations

Bulk-apply metadata only when all data is present. Don’t make per-object calls unless required; use batched API calls or gsutil -m setmeta. A sketch using the Python client’s batch context follows.
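
A sketch of the batch path with the Python client; calls issued inside the batch context are bundled into a single request (GCS caps a batch at roughly 100 calls, so chunk longer lists):

from google.cloud import storage

def set_custom_metadata(bucket_name, blob_names, metadata):
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    with client.batch():
        for name in blob_names:
            blob = bucket.blob(name)
            blob.metadata = metadata      # custom x-goog-meta key/value pairs
            blob.patch()                  # queued, sent when the context exits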

Side note:
Some metadata, such as retention policies, must be in place before upload; later changes may not apply retroactively.


Summary Table: When to Use What

Data Profile                 | Recommended Technique
Single small file (<5 MB)    | Simple client upload
Large file (>5 MB, <10 GB)   | Resumable upload
Many small files             | Threaded/parallel upload
Single large file (>10 GB)   | Chunk, upload, then compose
Plain text, compressible     | Gzip, then upload
Pre-encrypted/compressed     | Direct upload

Closing

Efficient, scalable GCS ingestion is architected, not ad hoc.
Outages, quotas, and cost cliffs usually emerge only at scale: prototype under load, measure, and adjust. Whenever possible, pair uploads with monitoring (Cloud Monitoring, formerly Stackdriver, alerting on upload errors and object age).

If you need battle-tested ETL patterns, custom transfer retries, or analysis of advanced features (lifecycle rules, dual-region objects), don’t hesitate to reach out or review the GCS documentation (as of June 2024, google-cloud-storage client v2.16.0+).

Deploy smart. Don’t pay the re-upload penalty.