How to Use AWS Lambda to Write Data into S3: A Step-by-Step Practical Guide
Cloud computing has transformed how we build modern applications. Among AWS’s vast offerings, AWS Lambda and Amazon S3 are two powerful services that often work together. Lambda lets you run serverless functions without managing infrastructure, while S3 offers scalable storage for almost any type of data.
In this blog post, I’ll show you how to write data from an AWS Lambda function directly into an S3 bucket. Whether you want to save logs, process images, or export reports on the fly, this practical guide will walk you through everything you need—from setup to code examples.
Why Write to S3 from Lambda?
Writing data from Lambda to S3 is a common pattern for serverless workflows because:
- Durability and scalability: S3 provides virtually unlimited storage with 99.999999999% durability.
- Integration: Many AWS services can easily consume data stored in S3.
- Cost-Efficiency: You pay only for what you use, with no upfront storage costs.
- Event-Driven Architecture: You can trigger downstream workflows after writing data (e.g., analyzing logs).
Now that we appreciate the benefits, let’s jump into the “how”.
Prerequisites
Before you start:
- An AWS account with the necessary permissions (IAM roles for Lambda and S3).
- An existing or newly created S3 bucket where your Lambda function will write objects.
- The AWS CLI configured locally, or AWS Console access.
Step 1: Create an S3 Bucket
If you don’t have an S3 bucket already:
- Log in to the AWS Management Console.
- Click “Create bucket.”
- Choose a globally unique bucket name (e.g., `my-lambda-output-bucket`).
- Select your preferred region.
- Leave other settings as default for now.
- Click “Create.”
Make note of this bucket name; it will be referenced in the Lambda code.
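If you prefer the command line, the same bucket can be created with the AWS CLI. This is a sketch; `my-lambda-output-bucket` and `us-east-1` are placeholders for your own bucket name and region:

```shell
# Create the bucket in the chosen region (bucket names are globally unique).
aws s3 mb s3://my-lambda-output-bucket --region us-east-1
```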
Step 2: Create an IAM Role with Proper Permissions
Your Lambda function needs permissions to write objects into the S3 bucket.
- Go to the IAM Console.
- Create a new role and choose “Lambda” as the trusted entity.
- Attach the following policy (you can refine policy scope by naming specific resources):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::my-lambda-output-bucket/*"]
    }
  ]
}
```

Replace `my-lambda-output-bucket` with your actual bucket name.
- Save the role’s name; you will assign it when creating your Lambda function.
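When you choose “Lambda” as the trusted entity in the console, AWS generates the role’s trust policy for you. If you instead create the role with the CLI or infrastructure-as-code, you need to supply it yourself; the minimal trust policy that lets Lambda assume the role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

You will usually also want to attach the AWS-managed `AWSLambdaBasicExecutionRole` policy so the function can write its `console.log` output to CloudWatch Logs.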
Step 3: Write the AWS Lambda Function Code
Below is an example Node.js Lambda function that writes a simple text file into your designated S3 bucket. It uses the AWS SDK for JavaScript v2 (the `aws-sdk` package), which comes preinstalled in the Node.js 16.x Lambda runtime.
```javascript
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

exports.handler = async (event) => {
  const bucketName = 'my-lambda-output-bucket'; // Replace with your bucket name
  const objectKey = 'sample-data-' + Date.now() + '.txt';
  const content = 'Hello from AWS Lambda! Current timestamp: ' + new Date().toISOString();

  const params = {
    Bucket: bucketName,
    Key: objectKey,
    Body: content,
    ContentType: 'text/plain'
  };

  try {
    await s3.putObject(params).promise();
    console.log(`Successfully uploaded object ${objectKey} to bucket ${bucketName}`);
    return {
      statusCode: 200,
      body: JSON.stringify({
        message: `File saved successfully as ${objectKey}`
      }),
    };
  } catch (error) {
    console.error('Error writing to S3:', error);
    return {
      statusCode: 500,
      body: JSON.stringify({
        message: 'Failed to save file',
        errorMsg: error.message
      }),
    };
  }
};
```
Explanation:
- We instantiate an `S3` client via the AWS SDK.
- We compose unique file names with a timestamp so files don’t overwrite each other.
- We prepare the content (`Body`), which is saved as `text/plain`.
- We use the `s3.putObject()` method wrapped in async/await.
- We return proper success or failure messages for debugging and easy integration.
Step 4: Create & Deploy the Lambda Function
- Open AWS Lambda Console.
- Click “Create function” and choose “Author from scratch.”
- Function name: WriteToS3Function
- Runtime: Node.js 16.x (the last runtime that preinstalls AWS SDK v2; on Node.js 18.x or later, use AWS SDK v3 or bundle the SDK with your code)
- Execution role: use the existing role created above.
- Paste your code into the inline editor, or upload a ZIP with dependencies if needed.
- Deploy.
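A note on runtimes: Node.js 18.x and later Lambda runtimes bundle AWS SDK v3 instead of the v2 `aws-sdk` package used earlier. If you deploy on one of those runtimes, a roughly equivalent handler looks like this (same bucket-name placeholder as before; a sketch rather than a production-ready implementation):

```javascript
// AWS SDK v3 variant of the handler (bundled with Node.js 18.x+ Lambda runtimes).
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({}); // region is taken from the Lambda environment

exports.handler = async (event) => {
  const bucketName = 'my-lambda-output-bucket'; // Replace with your bucket name
  const objectKey = `sample-data-${Date.now()}.txt`;

  try {
    // In v3, operations are command objects passed to client.send().
    await s3.send(new PutObjectCommand({
      Bucket: bucketName,
      Key: objectKey,
      Body: `Hello from AWS Lambda! Current timestamp: ${new Date().toISOString()}`,
      ContentType: 'text/plain'
    }));
    return {
      statusCode: 200,
      body: JSON.stringify({ message: `File saved successfully as ${objectKey}` })
    };
  } catch (error) {
    console.error('Error writing to S3:', error);
    return {
      statusCode: 500,
      body: JSON.stringify({ message: 'Failed to save file', errorMsg: error.message })
    };
  }
};
```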
Step 5 (Optional): Test Your Function
Within the Lambda console:
- Click “Test.”
- Configure a test event (you can use the default Hello World template; input content doesn’t affect this simple example).
- Run and check logs/output.
- Verify file creation by navigating back to your S3 bucket; you should see a new `.txt` file containing your message.
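You can also confirm the write from the command line (again, the bucket name is a placeholder for your own):

```shell
# List the most recently written objects in the bucket.
aws s3 ls s3://my-lambda-output-bucket/ --recursive | sort | tail -n 5
```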
Best Practices When Writing To S3 from Lambda
- Permission Scoping: Grant least privilege IAM policies for enhanced security.
- Retries & Error Handling: Incorporate retries or dead-letter queues if critical writes fail.
- File Size Limits: Lambda memory and response payloads are limited; for large objects, stream the upload or use S3 multipart uploads rather than building the whole payload in memory.
- Content Type & Metadata: Set a meaningful Content-Type and object metadata (and CORS rules on the bucket) if files will be served directly from S3.
- Optimizing Cold Starts: Keep SDK clients outside of handler if possible for better performance.
Conclusion
With just a few steps and lines of code, your serverless function can persist useful data directly into scalable cloud storage using AWS Lambda and Amazon S3 together — no servers required! This model unlocks many possibilities like event-driven ETL pipelines, log processing tools, or dynamic content generation.
Try experimenting by modifying contents dynamically based on events such as API Gateway triggers or CloudWatch events — and watch your cloud-native solutions evolve!
Happy coding! 🚀