From AWS Console to Terraform: Migrating Real Infrastructure for Reliability and Automation
Manual AWS resource management is not scalable. Inconsistent state, missing documentation, and inability to audit changes all lead to operational headaches. For long-term maintainability and security, infrastructure should be managed as code—Terraform is a practical, widely adopted option for AWS environments.
Real-World Motivation
Consider a common scenario: a production S3 bucket housing assets, created two years ago via the AWS Console. Its configuration is opaque: no source of truth, no audit trail. A compliance request lands asking for versioning status, encryption settings, and recovery procedures. Manual inspection through the Console barely suffices, and with no reproducible record, predictability vanishes.
Migrating Resources: The Path from Console to Code
Prerequisites
- Terraform ≥ v1.3.0
- AWS CLI ≥ v2.7.0
- Proper IAM access to query and manage your AWS environment.
Configure credentials:
aws configure
Or, export:
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="us-east-1"
Project Initialization
mkdir aws-to-terraform && cd aws-to-terraform
touch main.tf
Provider block in main.tf:
provider "aws" {
region = "us-east-1"
}
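The init output below reports a ">= 4.0.0" constraint, which comes from a required_providers block. A minimal sketch of that block; the exact constraint value is an assumption made to match the output shown:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Assumed constraint, consistent with the init output below
      version = ">= 4.0.0"
    }
  }
}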
Then:
terraform init
If you see:
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 4.0.0"...
you're ready. If not, verify your main.tf and credentials.
Importing Existing Resources
Direct import is the starting point, not the end. Import brings the current state into Terraform but does not auto-generate configuration. You'll need to reconstruct .tf definitions matching the live settings; otherwise, the first plan will show "wants to change everything" drift.
S3 Bucket: Example
First, author the resource:
resource "aws_s3_bucket" "assets" {
bucket = "prod-assets-2021"
# Additional arguments may be required post-import
}
Import command:
terraform import aws_s3_bucket.assets prod-assets-2021
Now:
terraform plan
See non-empty output? Some fields (like ACLs, versioning, encryption) aren't in your config:
# aws_s3_bucket.assets will be updated in-place
  ~ versioning {
      enabled = false -> true
    }
  ~ server_side_encryption_configuration {
      ...
    }
Iterate: adjust main.tf until plan shows no changes.
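With AWS provider v4 and later, versioning and server-side encryption are managed through standalone resources rather than inline blocks on aws_s3_bucket. A minimal sketch for versioning; the Enabled status is an assumption that should match the live bucket:
resource "aws_s3_bucket_versioning" "assets" {
  bucket = aws_s3_bucket.assets.id

  versioning_configuration {
    # Assumed to match the live bucket's current setting
    status = "Enabled"
  }
}
The standalone resource is imported with the bucket name as the ID:
terraform import aws_s3_bucket_versioning.assets prod-assets-2021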
Note: Not all arguments are importable or inferable; see Terraform AWS docs.
Infrastructure-at-Scale: Automation Tools
For accounts with significant resource sprawl, hand-authoring .tf files is unrealistic.
Terraformer
Terraformer can reverse-engineer existing AWS resources into .tf and .tfstate files. Caveats: results need heavy review; some constructs export poorly; names may be non-deterministic.
Installation (on macOS):
brew install terraformer
Example extraction (EC2 + VPC):
terraformer import aws --resources=vpc,ec2_instance --regions=us-east-1
- Output: folders containing .tf and .tfstate files.
- Typical gotcha: resource naming is machine-generated; refactor as needed.
- Logs may show:
2023/10/07 12:15:00 aws_vpc.main not found, skipping
Some resource types require extra permissions or explicit resource IDs.
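By default, Terraformer writes output under generated/<provider>/<service>. Exact file names vary by version, but the layout looks roughly like this:
generated/
  aws/
    vpc/
      vpc.tf
      provider.tf
      outputs.tf
      terraform.tfstate
    ec2_instance/
      ...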
Collaborative Workflow: Remote State
For team environments, local .tfstate quickly becomes a liability.
An S3 backend with DynamoDB locking:
terraform {
  backend "s3" {
    bucket         = "infra-tf-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-lock-table"
    encrypt        = true
  }
}
Initialize (or reconfigure):
terraform init -reconfigure
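If a local terraform.tfstate already exists, terraform init -migrate-state instead copies it into the new backend after a confirmation prompt:
terraform init -migrate-state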
Known Issue: the state bucket and lock table must exist before terraform init can use them, a bootstrapping circular dependency. Create them out-of-band with the AWS CLI or CloudFormation.
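A minimal CLI bootstrap, assuming the names from the backend block above (versioning on the state bucket is a common safeguard, not a requirement):
aws s3api create-bucket --bucket infra-tf-state --region us-east-1
aws s3api put-bucket-versioning --bucket infra-tf-state \
  --versioning-configuration Status=Enabled
# The S3 backend expects a string hash key named LockID
aws dynamodb create-table --table-name tf-lock-table \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST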
Best Practices and Non-Obvious Tips
- Always inspect the generated terraform plan before any apply.
- Parameterization: use variables.tf for environment-specific values (e.g., AMI IDs, tags).
- Modules: factor repeated architecture (VPCs, security groups) into local modules.
- Never commit secrets. Source them from AWS Secrets Manager (via the AWS provider), environment variables, or Vault.
- State security: encrypt state files at rest and in transit; restrict S3 bucket access.
- For large imports, split state imports into batches; massive single imports risk "state explosion" and timeouts.
- After initial import, lifecycle blocks such as ignore_changes can reduce noisy diffs as you refine parity (see the sketch after this list).
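A minimal sketch of such a lifecycle block on the bucket from earlier; the ignored attribute here is hypothetical, so target whatever actually drifts in your environment:
resource "aws_s3_bucket" "assets" {
  bucket = "prod-assets-2021"

  lifecycle {
    # Hypothetical: tags are managed by external tooling,
    # so suppress diffs on them while refining parity
    ignore_changes = [tags]
  }
}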
Final Words
Most “lift-and-shift” migrations reveal configuration drift and incomplete documentation. Expect surprises, and prioritize critical resources first. Manual review of generated HCL is mandatory. For every resource: does its imported configuration meet security and compliance requirements? Legacy IAM roles and security groups tend to be the messiest—plan for cleanup sprints after initial migration.
Out-of-scope
Automated import for certain resource types (e.g., Lambda event sources, API Gateway routes) generates incomplete HCL. In those cases, hybrid management or phased refactoring is more reliable.
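One hybrid pattern leaves such resources unmanaged and references them read-only through data sources. A minimal sketch, with a hypothetical function name:
# Reference an unmanaged Lambda without importing it
data "aws_lambda_function" "legacy" {
  # Hypothetical name, not part of this migration
  function_name = "legacy-handler"
}

output "legacy_lambda_arn" {
  value = data.aws_lambda_function.legacy.arn
}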
Summary Table: Typical Import Complexity
Resource | Import Feasibility | Post-Import Parity | Automation Support
---|---|---|---
S3 Bucket | High | Good (with tuning) | Yes
EC2 Instance | Medium | Manual adjustment | Yes
IAM Roles | Low-Medium | Manual patches | Partial
VPC/Subnet | High | Good | Yes
Lambda, ECS, and advanced networking require distinct migration strategies; consider module decomposition before import. For edge cases such as multi-account or hybrid-cloud setups, treat the gaps noted above as the starting point.