DynamoDB to S3

#Cloud #AWS #BigData #DynamoDB #S3 #DataPipeline

Efficiently Offloading DynamoDB Data to S3 for Scalable Analytics Pipelines

Why settle for DynamoDB’s limitations on querying and analytics when you can seamlessly export your data to S3 for unlimited downstream opportunities? Exporting data from DynamoDB to Amazon S3 unlocks powerful analytics capabilities by leveraging cost-effective, durable storage with broad ecosystem integration. This process is critical for teams seeking to optimize cost while scaling data workflows beyond real-time transactional use cases.

In this post, I’ll walk you through practical steps to efficiently offload your DynamoDB data into S3. Along the way, I’ll share best practices to help you maintain data integrity, avoid common pitfalls, and ensure your analytics pipeline is scalable and cost-effective.


Why Export DynamoDB Data to S3?

DynamoDB shines as a managed NoSQL database designed for low-latency, high-throughput transactional workloads. However, when it comes to complex queries, analytics, or large-scale batch processing, it’s not the best fit—largely due to:

  • Limited querying options (no joins, limited ad hoc query capabilities)
  • Costs scaling linearly with read/write throughput
  • No native support for complex analytics like windowing or multi-dimensional aggregations

Amazon S3 shines as an analytics data lake: it’s massively scalable, cost-efficient, and integrates seamlessly with tools like AWS Athena, Redshift Spectrum, EMR, and countless third-party analytics solutions.

By offloading snapshots or incremental extracts of your DynamoDB data to S3, you give your analysts and data engineers a flexible, open environment for deep insights.


Overview of Offloading DynamoDB Data to S3

There are a few ways you can move data:

  1. AWS DynamoDB Export to S3 (Native Export)
  2. Using AWS Data Pipeline or Glue ETL jobs
  3. Custom scripts using DynamoDB Streams + Lambda
  4. Full table scans with SDK (e.g., Python boto3) followed by uploading data

This post focuses on the native export and two practical custom approaches, since each comes with different pros and cons.


Method 1: Using DynamoDB Native Export to S3 (Recommended for Full Table Exports)

Amazon offers a native export feature for DynamoDB that writes your table data directly to S3 as compressed DynamoDB JSON or Amazon Ion files, without consuming read capacity or impacting your live workload.

How to do it (the table must have point-in-time recovery enabled; a scripted alternative follows the steps):

  1. Open AWS Console → DynamoDB → Tables → Select your table.
  2. Choose Export to S3.
  3. Specify the S3 bucket and prefix where you want the data.
  4. Select the export format: DynamoDB JSON (the default) or Amazon Ion.
  5. Start the export job.
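
If you would rather script the export than click through the console, the same job can be started with boto3’s export_table_to_point_in_time call. A minimal sketch, assuming a placeholder table ARN, bucket, and prefix, and that PITR is already enabled:

import boto3

dynamodb = boto3.client('dynamodb')

# Placeholder ARN, bucket, and prefix; substitute your own values
response = dynamodb.export_table_to_point_in_time(
    TableArn='arn:aws:dynamodb:us-east-1:123456789012:table/YourDynamoDBTable',
    S3Bucket='your-export-bucket',
    S3Prefix='dynamodb-native-exports/',
    ExportFormat='DYNAMODB_JSON',  # or 'ION'
)
print(response['ExportDescription']['ExportArn'])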

What happens under the hood?

  • DynamoDB performs a server-side snapshot of your table.
  • Data is written to S3 as compressed DynamoDB JSON or Amazon Ion files.
  • Costs are tied to exported data volume and S3 storage.

Pros

  • No impact on DynamoDB table performance.
  • No need to provision read capacity or write code.
  • Exported data can be queried with AWS Athena and other tools (or converted to Parquet with Glue for faster analytics).

Cons

  • Bulk export only (you can’t export just a partition or filtered items).
  • Snapshot-oriented: there is no continuous CDC stream (for change-level data, use Streams as in Method 2).

Example Use Case

If you want to create a daily snapshot for analytics or downstream ML pipelines, set up an EventBridge (formerly CloudWatch Events) schedule or a Step Functions workflow to trigger exports on a cadence, as sketched below.
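
One way to wire up such a schedule is an EventBridge rule that invokes a small Lambda wrapping the same export_table_to_point_in_time call shown earlier. A rough sketch, with a date-stamped prefix so each snapshot lands in its own folder (table ARN and bucket are placeholders):

from datetime import datetime, timezone

import boto3

dynamodb = boto3.client('dynamodb')

TABLE_ARN = 'arn:aws:dynamodb:us-east-1:123456789012:table/YourDynamoDBTable'
BUCKET = 'your-export-bucket'

def lambda_handler(event, context):
    # Date-stamped prefix keeps each daily snapshot in its own folder
    day = datetime.now(timezone.utc).strftime('%Y-%m-%d')
    response = dynamodb.export_table_to_point_in_time(
        TableArn=TABLE_ARN,
        S3Bucket=BUCKET,
        S3Prefix=f'daily-snapshots/dt={day}/',
        ExportFormat='DYNAMODB_JSON',
    )
    return {'exportArn': response['ExportDescription']['ExportArn']}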


Method 2: Incremental Offload Using DynamoDB Streams + Lambda to S3 (For Real-time / Near Real-time Scenarios)

Sometimes, you want incremental changes or a continuous flow instead of bulk snapshots. DynamoDB Streams capture real-time data modifications as a log you can consume.

How this works:

  • Enable DynamoDB Streams on your table.
  • Create a Lambda function triggered by stream events.
  • The Lambda processes INSERT/MODIFY/REMOVE events, transforming or batching them as needed.
  • It writes the resulting data (e.g., JSON Lines files, Parquet) to S3.

Advantages

  • Near real-time sync of changes to S3.
  • Stream-based processing enables CDC (Change Data Capture).
  • You can customize the schema, filtering, and data enrichment before writing to S3.

Considerations

  • Lambdas have execution time & memory limits — careful batching is necessary.
  • Error handling and data duplication must be planned.
  • Additional complexity compared to native exports.

Sample Lambda Handler (Python)

import json
import gzip
from io import BytesIO
from datetime import datetime, timezone

import boto3
from boto3.dynamodb.types import TypeDeserializer

s3 = boto3.client('s3')
deserializer = TypeDeserializer()

bucket = 'your-export-bucket'
prefix = 'dynamodb-stream-exports/'

def image_to_json(image):
    # Convert DynamoDB-typed attributes ({'S': ...}, {'N': ...}) into plain Python values
    return {k: deserializer.deserialize(v) for k, v in image.items()}

def lambda_handler(event, context):
    records = []
    for rec in event['Records']:
        ddb = rec['dynamodb']
        # REMOVE events carry no NewImage; fall back to OldImage, then the key attributes
        image = ddb.get('NewImage') or ddb.get('OldImage') or ddb.get('Keys')
        records.append({
            'event_name': rec['eventName'],  # INSERT / MODIFY / REMOVE
            'item': image_to_json(image),
        })

    if records:
        # Serialize to JSON Lines; default=str handles the Decimal values the deserializer emits
        data = '\n'.join(json.dumps(r, default=str) for r in records).encode('utf-8')

        # Compress the output
        out_buffer = BytesIO()
        with gzip.GzipFile(fileobj=out_buffer, mode='wb') as gz:
            gz.write(data)
        out_buffer.seek(0)

        # Create a key with a timestamp
        key = f"{prefix}batch_{datetime.now(timezone.utc).strftime('%Y%m%dT%H%M%SZ')}.jsonl.gz"
        s3.put_object(Bucket=bucket, Key=key, Body=out_buffer)

    return {'statusCode': 200}

This Lambda batches stream records, converts them to JSON, compresses them, and writes to S3 for downstream analytics.
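
To wire this up, you enable a stream on the table and attach the function with an event source mapping; the batch size and batching window control how many records each invocation receives. A rough sketch using boto3 (table and function names are placeholders):

import boto3

dynamodb = boto3.client('dynamodb')
lambda_client = boto3.client('lambda')

# Enable a stream that carries both old and new item images (placeholder table name)
table = dynamodb.update_table(
    TableName='YourDynamoDBTable',
    StreamSpecification={'StreamEnabled': True, 'StreamViewType': 'NEW_AND_OLD_IMAGES'},
)
stream_arn = table['TableDescription']['LatestStreamArn']

# Attach the Lambda to the stream; tune BatchSize and the batching window for your volume
lambda_client.create_event_source_mapping(
    EventSourceArn=stream_arn,
    FunctionName='your-stream-export-function',  # placeholder name
    StartingPosition='TRIM_HORIZON',
    BatchSize=100,
    MaximumBatchingWindowInSeconds=30,
)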


Method 3: Full Table Scan with SDK (Python Example)

If you need an offline or ad hoc export and cannot use the native export (e.g., due to region limitations), you can run a paginated scan yourself.

A typical approach:

import json

import boto3
from boto3.dynamodb.types import TypeDeserializer

dynamodb = boto3.client('dynamodb')
s3 = boto3.client('s3')
deserializer = TypeDeserializer()

TABLE_NAME = 'YourDynamoDBTable'
BUCKET = 'your-bucket'
KEY = 'exports/dynamodb-export.json'

def scan_table():
    # Paginate through the full table scan
    paginator = dynamodb.get_paginator('scan')
    items = []
    for page in paginator.paginate(TableName=TABLE_NAME):
        items.extend(page['Items'])
    return items

def deserialize_item(item):
    # Convert DynamoDB-typed JSON ({'S': ...}, {'N': ...}) into plain Python values
    return {k: deserializer.deserialize(v) for k, v in item.items()}

def main():
    items = scan_table()

    # Convert DynamoDB JSON to standard JSON
    converted_items = [deserialize_item(i) for i in items]

    # Serialize; default=str handles the Decimal values the deserializer emits
    data = json.dumps(converted_items, default=str)

    # Upload to S3
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=data)
    print(f"Exported {len(items)} items to s3://{BUCKET}/{KEY}")

if __name__ == '__main__':
    main()

Important Notes:

  • Full scans are expensive and can impact your table’s provisioned capacity.
  • Implement exponential backoff or run the export during low-traffic hours.
  • Use ProjectionExpression to limit attributes if you don’t need all of them (see the snippet below).
  • This approach is manual and less scalable than native exports.
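
Building on the backoff and ProjectionExpression notes above, here is a hedged variation of the scan_table() function from the script above (it reuses the dynamodb client and TABLE_NAME defined there); 'id' and 'status' are hypothetical attribute names:

# Pull only the attributes you need and use a smaller page size
# to soften the read pressure on the table.
def scan_table_projected():
    paginator = dynamodb.get_paginator('scan')
    items = []
    pages = paginator.paginate(
        TableName=TABLE_NAME,
        ProjectionExpression='id, #st',
        # 'status' is a DynamoDB reserved word, so alias it
        ExpressionAttributeNames={'#st': 'status'},
        PaginationConfig={'PageSize': 100},
    )
    for page in pages:
        items.extend(page['Items'])
    return items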

Best Practices for Data Integrity & Scalability

  • Use consistent data serialization: DynamoDB stores attributes in a special JSON format (with type annotations). Use libraries like boto3.dynamodb.types.TypeDeserializer or AWS Glue Crawlers to handle conversions.
  • Partition your exported data: When exporting large datasets, split files by date/time or by shard key to improve parallel processing downstream.
  • Monitor export jobs and handle failures: Use CloudWatch alarms and Lambda dead-letter queues to catch export errors.
  • Compress files on S3: Use gzip or a columnar format like Parquet to reduce storage costs and improve query speed.
  • Tag your exports and set lifecycle policies: Transition older exports to Glacier or delete them to save money (a sketch of a lifecycle rule follows below).
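
As a sketch of that last point, here is one way to attach a lifecycle rule to an export prefix, assuming the placeholder bucket and prefix used earlier; adjust the transition and expiration windows to your retention needs:

import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='your-export-bucket',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'expire-dynamodb-exports',
                'Filter': {'Prefix': 'dynamodb-native-exports/'},
                'Status': 'Enabled',
                # Move exports to Glacier after 30 days, delete after a year
                'Transitions': [{'Days': 30, 'StorageClass': 'GLACIER'}],
                'Expiration': {'Days': 365},
            }
        ]
    },
)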

Wrapping Up

Offloading your DynamoDB data to S3 unlocks massive analytic flexibility and cost efficiencies. Whether you choose the managed native export for full snapshots or build incremental pipelines with Streams and Lambda, the ability to systematically move and store your DynamoDB data in S3 is foundational for scaling your data workflows beyond transactional apps.

Here’s a quick cheat sheet to choose your approach:

  • Full Table Snapshot Export: DynamoDB Native Export to S3
  • Incremental Real-time Export: DynamoDB Streams + Lambda
  • Ad-hoc or Region Unsupported: SDK Scan + S3 Upload

Start by experimenting with native exports for your tables today! Then, build incremental pipelines if you need granular data freshness.


If you found this guide useful or want me to dive deeper into Lambda stream processing or query patterns on Athena over DynamoDB-exported data, drop me a note below!