
Tutorial 05: Importing Existing Infrastructure

Adopt unmanaged cloud resources into Terraform without destroying and recreating them

Published February 12, 2025
Updated December 11, 2025

Brutal Truth Up Front

Greenfield Terraform projects are rare. Most organizations have years of manually created infrastructure that needs to come under Terraform management without downtime.

Import is tedious and error-prone. You’ll write HCL to match existing resources, import them one by one, and verify state matches reality. It’s not fun, but it’s essential for brownfield adoption.

Prerequisites

  • Completed Tutorials 01-04
  • AWS account with existing resources (or we’ll create some manually)
  • Comfort with Terraform state commands
  • Understanding that import doesn’t generate HCL - you write it manually

What You’ll Build

Manually create AWS resources via console, then import them into Terraform management. You’ll learn:

  • Writing HCL to match existing resource attributes
  • Import command usage
  • State manipulation for complex scenarios
  • Verification techniques

The Exercise

Step 1: Create Resources Manually (Simulating Legacy Infrastructure)

Go to AWS Console and manually create:

  1. S3 Bucket:

    • Name: legacy-bucket-YOUR-INITIALS-12345
    • Region: us-east-1
    • Enable versioning
    • Add tags: Environment=Production, Team=Infrastructure
  2. Security Group:

    • Name: legacy-web-sg
    • Description: Legacy web server security group
    • VPC: Default VPC
    • Inbound rule: HTTPS (443) from 0.0.0.0/0
    • Tag: Environment=Production
  3. IAM Role (if you have permissions):

    • Name: legacy-app-role
    • Trust policy: EC2 service
    • No policies attached yet

Note down exact values - you’ll need them for HCL.
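If transcribing from the console feels error-prone, you can also read the exact attributes back with the AWS CLI (substitute your real bucket name; these require configured AWS credentials):

```shell
aws s3api get-bucket-versioning --bucket legacy-bucket-YOUR-INITIALS-12345
aws s3api get-bucket-tagging --bucket legacy-bucket-YOUR-INITIALS-12345
aws ec2 describe-security-groups --filters Name=group-name,Values=legacy-web-sg
aws iam get-role --role-name legacy-app-role
```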

Step 2: Create Terraform Configuration to Match

Create main.tf that describes existing resources:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# S3 Bucket - must match existing attributes exactly
resource "aws_s3_bucket" "legacy" {
  bucket = "legacy-bucket-YOUR-INITIALS-12345"  # Exact name from console
  
  tags = {
    Environment = "Production"
    Team        = "Infrastructure"
  }
}

resource "aws_s3_bucket_versioning" "legacy" {
  bucket = aws_s3_bucket.legacy.id
  
  versioning_configuration {
    status = "Enabled"
  }
}

# Security Group - note the VPC dependency
data "aws_vpc" "default" {
  default = true
}

resource "aws_security_group" "legacy_web" {
  name        = "legacy-web-sg"
  description = "Legacy web server security group"
  vpc_id      = data.aws_vpc.default.id
  
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  # AWS adds default egress rule - must include it
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  tags = {
    Environment = "Production"
  }
}

# IAM Role (if created)
resource "aws_iam_role" "legacy_app" {
  name = "legacy-app-role"
  
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

Critical: Don’t apply yet! Terraform will try to create resources that already exist.

Step 3: Import Resources

Initialize without applying:

terraform init

Import the S3 bucket:

terraform import aws_s3_bucket.legacy legacy-bucket-YOUR-INITIALS-12345

Terraform adds the bucket to state. Verify:

terraform state list

You should see aws_s3_bucket.legacy.

Import versioning configuration (separate resource in AWS provider):

terraform import aws_s3_bucket_versioning.legacy legacy-bucket-YOUR-INITIALS-12345

Import security group:

terraform import aws_security_group.legacy_web sg-XXXXX

Find the security group ID from console or:

aws ec2 describe-security-groups --filters Name=group-name,Values=legacy-web-sg --query 'SecurityGroups[0].GroupId' --output text

Import IAM role:

terraform import aws_iam_role.legacy_app legacy-app-role

Step 4: Verify Import Accuracy

Run plan:

terraform plan

Goal: No changes. Your infrastructure matches the configuration.

Reality: You’ll probably see differences:

  ~ resource "aws_security_group" "legacy_web" {
        name        = "legacy-web-sg"
      ~ description = "Legacy web server security group" -> "Managed by Terraform"
      # ...
    }

This means your HCL doesn’t match AWS reality. Common mismatches:

  1. Missing attributes - AWS sets defaults you didn’t specify
  2. Ordering differences - CIDR blocks in different order
  3. Computed values - ARNs, timestamps, AWS-generated IDs

Fix these iteratively:

# Show current state
terraform state show aws_security_group.legacy_web

# Compare with your HCL, adjust HCL to match
# Run plan again
terraform plan

Repeat until plan shows zero changes.
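To script that check, terraform plan supports -detailed-exitcode, which distinguishes a clean plan from one with pending changes:

```shell
terraform plan -detailed-exitcode
# exit code 0: no changes - import is complete
# exit code 1: error
# exit code 2: changes pending - keep adjusting HCL
```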

Step 5: State Manipulation Commands

List resources:

terraform state list

Show resource details:

terraform state show aws_s3_bucket.legacy

Remove from state (doesn’t delete AWS resource):

terraform state rm aws_s3_bucket.legacy

Move/rename in state:

terraform state mv aws_s3_bucket.legacy aws_s3_bucket.production

This changes the resource address in state without touching AWS.
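Since Terraform 1.1, the same rename can also be expressed declaratively with a moved block in your configuration. Terraform performs the state move on the next plan/apply, and the change is reviewable in version control:

```hcl
moved {
  from = aws_s3_bucket.legacy
  to   = aws_s3_bucket.production
}
```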

The Break (Intentional Failure Scenarios)

Scenario 1: Import Without Writing HCL First

Try importing without resource block:

# Comment out the aws_s3_bucket.legacy resource in main.tf
terraform import aws_s3_bucket.legacy legacy-bucket-YOUR-INITIALS-12345

Error: Resource aws_s3_bucket.legacy not found in configuration.

Import requires the resource block to exist first. Terraform doesn’t generate HCL.
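One caveat: Terraform 1.5 added declarative import blocks, which can draft configuration for you. This tutorial sticks to the classic CLI workflow, but the block form looks like this:

```hcl
import {
  to = aws_s3_bucket.legacy
  id = "legacy-bucket-YOUR-INITIALS-12345"
}
```

Running terraform plan -generate-config-out=generated.tf then writes candidate HCL to generated.tf - review it line by line before trusting it.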

Scenario 2: Attribute Mismatch After Import

Import the bucket, then try to change its name:

resource "aws_s3_bucket" "legacy" {
  bucket = "different-name-12345"  # Changed
  # ...
}

Plan shows:

-/+ resource "aws_s3_bucket" "legacy" {
    ~ bucket = "legacy-bucket-YOUR-INITIALS-12345" -> "different-name-12345" # forces replacement

Terraform wants to destroy the imported bucket and create a new one! This defeats the purpose of import.

Lesson: After import, HCL must match reality. Don’t change attributes unless you intend to modify infrastructure.

Scenario 3: Import Without All Dependent Resources

Import just the S3 bucket without its versioning configuration. Plan shows:

  + resource "aws_s3_bucket_versioning" "legacy" {
      + bucket = "legacy-bucket-YOUR-INITIALS-12345"
      + versioning_configuration {
          + status = "Enabled"
        }
    }

Terraform wants to create versioning even though it’s already enabled. AWS provider splits resources - you must import both.

The Recovery

Recovering from Incorrect Import

If you imported with wrong attributes:

# Remove from state
terraform state rm aws_s3_bucket.legacy

# Fix HCL to match AWS reality
# Import again
terraform import aws_s3_bucket.legacy legacy-bucket-YOUR-INITIALS-12345

# Verify with plan
terraform plan

Bulk Import Strategy for Many Resources

For dozens of resources, use this workflow:

  1. Generate an inventory of AWS resources:

aws ec2 describe-instances --output json > instances.json
aws s3api list-buckets --output json > buckets.json
# etc.

  2. Write skeleton HCL from the inventory
  3. Create an import script:

#!/bin/bash
set -euo pipefail

# import_list.txt format: <resource_type> <resource_name> <resource_id>
while read -r resource_type resource_name resource_id; do
  terraform import "${resource_type}.${resource_name}" "${resource_id}"
done < import_list.txt

  4. Iterate until you get a zero-diff plan
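Before pointing a loop like this at real infrastructure, it's worth a dry run that prints the commands instead of executing them (the resource IDs below are made up for illustration):

```shell
#!/bin/bash
# Dry run: echo the import commands instead of running terraform.
cat > import_list.txt <<'EOF'
aws_s3_bucket legacy legacy-bucket-abc-12345
aws_iam_role legacy_app legacy-app-role
EOF

while read -r rtype rname rid; do
  echo "terraform import ${rtype}.${rname} ${rid}"
done < import_list.txt
# prints:
# terraform import aws_s3_bucket.legacy legacy-bucket-abc-12345
# terraform import aws_iam_role.legacy_app legacy-app-role
```

Once the printed commands look right, swap echo back out for the real terraform import.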

Exit Criteria

You understand this tutorial if you can:

  • Write HCL matching existing AWS resource attributes
  • Import resources using correct AWS identifiers
  • Diagnose plan differences between state and configuration
  • Use state commands to inspect and manipulate resources
  • Explain why import doesn’t generate HCL automatically

Key Lessons

  1. Import requires manual HCL - the terraform import command doesn’t generate configuration
  2. Attributes must match exactly - any mismatch causes unwanted changes
  3. Some resources are composite - S3 bucket + versioning are separate imports
  4. State manipulation is surgical - use state rm, state mv carefully
  5. Verification is iterative - expect multiple plan/fix cycles

Why This Matters in Production

On the Air Force program, we inherited 200+ manually created resources:

  • EC2 instances with 3 years of configuration drift
  • S3 buckets with complex lifecycle policies
  • IAM roles with inline policies (hard to represent in HCL)
  • Security groups referencing each other

Import strategy:

  1. Start with leaf resources (no dependencies)
  2. Import foundational resources (VPCs, subnets)
  3. Import dependent resources (instances, RDS)
  4. Verify each layer before next

The effort took two weeks with three engineers. Now those resources are protected by Terraform’s declarative state.

Real-World Import Complexity

IAM Role with Inline Policies:

AWS Console shows inline policy, but Terraform prefers aws_iam_role_policy resource:

resource "aws_iam_role" "app" {
  name = "app-role"
  # ...
}

resource "aws_iam_role_policy" "app" {
  name = "app-policy"
  role = aws_iam_role.app.id
  
  policy = jsonencode({
    # Copy JSON from console
  })
}

# Import both separately
terraform import aws_iam_role.app app-role
terraform import aws_iam_role_policy.app app-role:app-policy

EC2 Instances with EBS Volumes:

Volumes attached to instances aren’t part of aws_instance resource:

resource "aws_instance" "app" {
  # ...
}

resource "aws_ebs_volume" "app_data" {
  # ...
}

resource "aws_volume_attachment" "app_data" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.app_data.id
  instance_id = aws_instance.app.id
}

# Three separate imports required
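With placeholder IDs, those three imports would look like the following; note that aws_volume_attachment takes a composite DEVICE_NAME:VOLUME_ID:INSTANCE_ID identifier:

```shell
terraform import aws_instance.app i-0123456789abcdef0
terraform import aws_ebs_volume.app_data vol-0123456789abcdef0
terraform import aws_volume_attachment.app_data /dev/sdf:vol-0123456789abcdef0:i-0123456789abcdef0
```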

Automation Tools

Consider tools for large-scale imports:

  • Terraformer - generates HCL from existing infrastructure
  • Former2 - browser extension for AWS console
  • aws2tf - shell script for AWS resource import

These tools aren’t perfect but provide starting HCL to refine.
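As one example, a Terraformer invocation covering this tutorial’s resource types might look like this (flags vary by version, so check its documentation):

```shell
terraformer import aws --resources=s3,sg --regions=us-east-1
```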

Next Steps

Tutorial 06: Data Sources vs Resources - Learn when to query existing infrastructure vs manage it.

Cleanup

Destroy imported resources:

terraform destroy

Or remove from Terraform management without destroying:

terraform state rm aws_s3_bucket.legacy
terraform state rm aws_s3_bucket_versioning.legacy
terraform state rm aws_security_group.legacy_web

# Resources still exist in AWS, just not managed by Terraform


Need Help Implementing This?

I help government contractors and defense organizations modernize their infrastructure using Terraform and AWS GovCloud. With 15+ years managing DoD systems and active Secret clearance, I understand compliance requirements that commercial consultants miss.