Terraform To Pulumi

#Cloud #DevOps #Infrastructure #Terraform #Pulumi #IaC

Seamlessly Transitioning Infrastructure as Code: Migrating from Terraform to Pulumi

Migrating Infrastructure as Code (IaC) systems isn’t about chasing trends; it’s about eliminating bottlenecks and integrating infrastructure with software lifecycles more effectively. If your Terraform-managed cloud estate is becoming difficult to extend or automate, moving to Pulumi, with its first-class support for general-purpose programming languages, is worth considering. Here’s what’s involved, from initial audit to final cutover, including details that most guides gloss over.


Why Switch from Terraform to Pulumi?

Pulumi’s edge: tight integration with real programming languages (Python, TypeScript, Go, and C#), no need to bend HCL into awkward shapes, cleaner logic branching, better code reuse, unit tests that plug into CI, and easier abstraction of complex modules. But the migration demands an understanding of resource state handling, reconciling provider API differences, and retraining teams.

If you have complex dependency graphs, dynamic infrastructure, or want infrastructure logic in the same repo as application code, Pulumi makes life easier. Just don’t expect a 1:1 translation—plan for rewrites, not conversions.


Baseline: Audit Your Existing Terraform Footprint

Start with a comprehensive asset inventory.

  • Identify all resource definitions (.tf files), provider versions, backend config, and module boundaries.
  • Extract the state file (terraform.tfstate). If you’re using remote state (e.g., S3/DynamoDB or GCS), pull a local copy.
  • Diagram resource dependencies. Even Pulumi’s import cannot resolve ambiguous links “out of the box”.
terraform version                     # Record the CLI version (e.g. v1.5.0)
terraform providers                   # Generate a list of providers
terraform graph | dot -Tpng > graph.png  # Visualize dependency graph (needs 'dot')

Note: Anything defined with indirect references (e.g., via count or for_each) requires special care to reconcile.
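
For larger estates, a quick inventory script keeps the audit honest. A minimal sketch in TypeScript, assuming a local terraform.tfstate copy in state format v4 (Terraform 0.12 or newer):

import * as fs from "fs";

// Count resources per Terraform type from a local state copy (format v4).
const state = JSON.parse(fs.readFileSync("terraform.tfstate", "utf8"));

const counts: Record<string, number> = {};
for (const res of state.resources ?? []) {
  // Each entry carries one instance per count/for_each key; these are the
  // definitions that need special care during import.
  counts[res.type] = (counts[res.type] ?? 0) + (res.instances?.length ?? 1);
}

console.table(counts);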


Install Pulumi and Prep Your Toolchain

Install the CLI (ensure parity across dev/CI environments).

# Official install script (pin a specific version for parity across machines):
curl -fsSL https://get.pulumi.com | sh -s -- --version 3.113.0

# On macOS or Linux with Homebrew:
brew install pulumi          # Or grab the official release tarball for servers without brew

Authenticate against your chosen backend (Pulumi Service, S3, Azure Blob, etc.):

pulumi login                 # Defaults to app.pulumi.com; pass s3://<bucket> to keep state in your own S3 bucket.

# Typical errors if backend misconfigured:
# error: could not access bucket: AccessDenied: Access Denied

Side note: Pin your CLI and language SDK versions (package.json for JS/TS, requirements.txt for Python) to avoid subtle provider drift.


Initialize a Pulumi Project in Your Stack’s Language

Pulumi wants an explicit project scaffold. Use the generator; it creates the project file (Pulumi.yaml), the dependency manifest, and an entry-point main file.

pulumi new aws-typescript --generate-only -n infra-migration         # For TypeScript/AWS
cd infra-migration
npm install
  • For Python: pulumi new aws-python
  • For Go: pulumi new aws-go

The key file: Pulumi.yaml—this identifies the stack and language. Version it with git.
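
The generated entry point is a throwaway placeholder. For aws-typescript it is an index.ts roughly like the sketch below (exact contents vary by template version); it gets replaced wholesale during the rewrite phase:

import * as aws from "@pulumi/aws";

// Placeholder resource from the template; delete it once the real code lands.
const bucket = new aws.s3.Bucket("my-bucket");

export const bucketName = bucket.id;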


Import Existing Resources to Pulumi State

Critical: Don’t skip this—directly importing resources avoids downtime and accidental deletion.

Use pulumi import for each resource. This binds cloud resources to Pulumi’s state backend, preserving live infrastructure.

# General form:
pulumi import <provider-type> <logical-name> <cloud-resource-id>

# AWS S3 example:
pulumi import aws:s3/bucket:Bucket data_bucket my-legacy-bucket

Batching imports? Script the commands from your Terraform state output rather than typing each one (a sketch follows below). If you miss a dependency, pulumi up will warn:

error: Preview failed: resource 'xyz' does not exist
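
One way to script that batch (a sketch, not a drop-in tool): terraform show -json exposes the physical IDs that terraform state list omits, so the import commands can be generated from it. The type mapping below is hand-maintained and illustrative, and the id attribute is not the correct import ID for every resource type, so review the output before running it.

import { execSync } from "child_process";

// Illustrative, partial mapping from Terraform types to Pulumi type tokens;
// extend it for whatever your state actually contains.
const typeMap: Record<string, string> = {
  aws_s3_bucket: "aws:s3/bucket:Bucket",
  aws_instance: "aws:ec2/instance:Instance",
};

// Root-module resources only; walk child_modules for module-heavy estates.
const state = JSON.parse(execSync("terraform show -json").toString());
const resources = state.values?.root_module?.resources ?? [];

for (const r of resources) {
  const pulumiType = typeMap[r.type];
  if (!pulumiType || !r.values?.id) continue; // skip unmapped or ID-less entries
  console.log(`pulumi import ${pulumiType} ${r.name} ${r.values.id}`);
}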

Gotcha: Imported resources will not appear in code until you define them. Imports alone don’t protect against drift.


Rewrite HCL to Real Code

This is the labor-intensive phase: mapping declarative blocks onto real functions and objects. It is rarely mechanical; manual refactoring is unavoidable wherever HCL leans heavily on loops, conditionals, or tangled dependencies.

Translation Table:

Terraform HCL                 | Pulumi (TypeScript, example)
------------------------------|---------------------------------------
resource block                | new <Resource>()
output, locals, vars          | const, functions, class properties
Modules                       | Custom component resources (classes)
Interpolation (${})           | Native string or function composition
count, for_each               | Array .map() or loops

Real Terraform block:

resource "aws_instance" "db" {
  count      = 2
  ami        = "ami-03d5c68bab01f3496"
  instance_type = "t3.micro"
}

Pulumi (excerpt):

import * as aws from "@pulumi/aws";

// Two instances, indexed by position; each Pulumi resource needs a unique logical name.
const dbInstances = Array.from({ length: 2 }).map((_, i) =>
  new aws.ec2.Instance(`db-${i}`, {
    ami: "ami-03d5c68bab01f3496",
    instanceType: "t3.micro",
  })
);

Note: Refactoring strange depends_on kludges is often needed; Pulumi resolves dependencies through references, not explicit lists.
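
A short sketch of what that looks like in practice: passing one resource’s output into another is enough for Pulumi to order them, and the explicit dependsOn resource option remains available for the rare side-effect-only cases. The bucket policy below is purely illustrative.

import * as aws from "@pulumi/aws";

const logBucket = new aws.s3.Bucket("logs");

// Referencing logBucket.id and logBucket.arn creates the ordering;
// no depends_on-style list is needed.
const logPolicy = new aws.s3.BucketPolicy("logs-policy", {
  bucket: logBucket.id,
  policy: logBucket.arn.apply(arn => JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Principal: { Service: "logging.s3.amazonaws.com" },
      Action: "s3:PutObject",
      Resource: `${arn}/*`,
    }],
  })),
});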


Migrate Variables, Outputs, and Config

Map Terraform variables to Pulumi Config.

Terraform (variables.tf):

variable "db_count" {
  type    = number
  default = 2
}
output "db_arns" {
  value = aws_instance.db[*].arn
}

Pulumi (TypeScript):

import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();
// ?? keeps an explicit db_count of 0 distinct from "not set".
const dbCount = config.getNumber("db_count") ?? 2;

export const dbArns = dbInstances.map(inst => inst.arn);

Pulumi config values are set with pulumi config set. For automation, they are frequently injected as environment variables in CI. Missing keys surface at preview time as “Missing required configuration variable” errors when you use the require* variants, and as silent fallbacks when you don’t. Audit the entire config surface before cutover.
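
To make missing values fail loudly rather than silently falling back, the require* variants can be used; a short sketch (the aws_region key is illustrative):

import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();

// Required: `pulumi preview` fails fast if the key is missing, which is far
// easier to debug in CI than a deployment that quietly used a default.
export const region = config.require("aws_region");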


Dry Run and Validate—Don’t Trust, Verify

Run a preview. Expect noise if state and declarations differ.

pulumi preview     # Shows the planned changes without applying anything; scrutinize replaces and deletes.
pulumi up          # Applies the changes (with -y for auto-approval in CI).

Look for unexpected deletes in the plan; they usually mean a resource was misnamed or mis-imported. Always cross-check the output against your live cloud console before approving.

If you hit drift or import conflicts, forcibly refresh state:

pulumi refresh

Known issue: Some providers (notably GCP, Azure) can misdiff minor metadata, showing “replace” when only a tag changed. Manual intervention may be needed.
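
When a provider keeps proposing a replace over metadata you do not care about, one workaround is the ignoreChanges resource option; a sketch, assuming the spurious diff is limited to tags (the tag values are illustrative):

import * as aws from "@pulumi/aws";

// Tell the engine to leave tag drift alone instead of proposing a replace.
// Scope this narrowly: ignored properties are no longer reconciled at all.
const dataBucket = new aws.s3.Bucket("data_bucket", {
  tags: { team: "platform" },
}, {
  ignoreChanges: ["tags"],
});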


Pipeline Integration: CI/CD

Swap out all terraform plan/terraform apply invocations for Pulumi equivalents.

Example: GitHub Actions

- uses: pulumi/actions@v3
  with:
    command: up
    stack-name: prod
  env:
    PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}

Set secrets for the backend. For large orgs, the Pulumi Service adds per-resource access logs and policy enforcement, but a self-managed S3 backend is still viable at scale.

Tip: Pulumi’s Automation API (Node.js, Python, Go, .NET) can drive updates directly from other build steps instead of shelling out to the CLI.
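
A minimal Automation API sketch in Node.js, assuming the infra-migration project from earlier sits next to the build script:

import * as automation from "@pulumi/pulumi/automation";

async function deploy() {
  // Points at an existing Pulumi project on disk; creates the stack if absent.
  const stack = await automation.LocalWorkspace.createOrSelectStack({
    stackName: "prod",
    workDir: "./infra-migration",
  });

  await stack.refresh({ onOutput: console.info });
  const result = await stack.up({ onOutput: console.info });
  console.info(`update result: ${result.summary.result}`);
}

deploy().catch(err => {
  console.error(err);
  process.exit(1);
});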


Migration Pitfalls and Non-Obvious Tips

  • Dual control is a trap: Don’t let both tools manage resources simultaneously. Cut all write access from Terraform post-import.
  • Component Resources: For complex interdependent modules, use custom classes to bundle logic; Pulumi supports real inheritance and composition, unlike HCL modules (see the sketch after this list).
  • Cross-check resource naming: Mismatches in logical vs. physical resource names can orphan resources.
  • Read provider plugin docs: Minor differences exist (e.g., default tags, region fallback) that don’t always line up with Terraform.
  • Incremental cutover: For sensitive stacks (production, databases), do staged migration, validating with a canary environment using real cloud billing graphs, not just preview diffs.
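
A sketch of the component-resource pattern mentioned above: a class that bundles related resources the way a small Terraform module would. The SecureBucket name and its contents are illustrative.

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// A bucket plus its public-access block, parented under one logical node
// so `pulumi up` shows them as a unit.
class SecureBucket extends pulumi.ComponentResource {
  public readonly bucket: aws.s3.Bucket;

  constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
    super("myorg:storage:SecureBucket", name, {}, opts);

    this.bucket = new aws.s3.Bucket(`${name}-bucket`, {}, { parent: this });

    new aws.s3.BucketPublicAccessBlock(`${name}-pab`, {
      bucket: this.bucket.id,
      blockPublicAcls: true,
      blockPublicPolicy: true,
    }, { parent: this });

    this.registerOutputs({ bucketName: this.bucket.bucket });
  }
}

// Usage: const assets = new SecureBucket("assets");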

Conclusion: Is It Worth It?

With Pulumi, infrastructure code is not just “declarative description” but fully programmable—if you need that expressiveness or if your team wants infra and app code in one DX, the migration pays for itself. But getting there is a hands-on process, not a transliteration. Expect edge cases.

Bottom line: Map, import, rewrite, test. Avoid dual control. Refactor where it hurts. And, above all, tie state handling to real-world infrastructure before writing any migration code.


Want real-world repo samples for AWS/GCP/Azure or deeper module migration patterns? Leave a note or open an issue—there’s no one-size-fits-all approach for this kind of migration.