Tutorial 02: The Plan-Apply Contract
Master predicting whether Terraform will update resources in-place or destroy and recreate them
Brutal Truth Up Front
`terraform plan` is not a suggestion - it’s a contract. If the plan says it will destroy and recreate your database, believe it. Don’t assume “apply will figure it out.” It won’t.
Most production outages caused by Terraform happen because someone didn’t read the plan output carefully. A single `# forces replacement` comment means downtime unless you explicitly prevent it.
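Because the plan is a contract, you can also enforce it mechanically. Below is a minimal sketch of a guard that scans rendered plan output for replacements; the helper name and workflow are illustrative, not a built-in Terraform feature:

```shell
# plan_has_replacement: read rendered plan text on stdin and succeed if
# any resource would be destroyed and recreated. Illustrative helper.
plan_has_replacement() {
  grep -q "forces replacement"
}

# Typical use in a pipeline (assumes an initialized working directory):
#   terraform plan -out=tfplan
#   if terraform show -no-color tfplan | plan_has_replacement; then
#     echo "plan replaces a resource - manual review required" >&2
#     exit 1
#   fi
```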
Prerequisites
- Completed Tutorial 01: State is Truth
- AWS account configured
- Understanding that state tracks current infrastructure
What You’ll Build
An EC2 instance that you’ll modify in different ways. Some changes will update in-place. Others will force destroy-recreate cycles. You’ll learn to predict which is which before running apply.
The Exercise
Step 1: Create a Basic EC2 Instance
Create main.tf:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-tutorial"
  }
}

output "instance_id" {
  value = aws_instance.example.id
}

output "public_ip" {
  value = aws_instance.example.public_ip
}
```
Initialize and apply:
```shell
terraform init
terraform apply
```
Note the instance ID in the output. You’ll use this to verify behavior.
Step 2: Safe Changes (Update In-Place)
Modify the tags:
```hcl
resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  tags = {
    Name        = "terraform-tutorial"
    Environment = "dev"
    Owner       = "larue"
  }
}
```
Run plan:
```shell
terraform plan
```
Look for the `~` symbol. It means “update in-place”. The output shows:
```
  ~ resource "aws_instance" "example" {
      ~ tags = {
          + "Environment" = "dev"
          + "Owner"       = "larue"
            "Name"        = "terraform-tutorial"
        }
        # (29 unchanged attributes hidden)
    }
```
Apply this change:
```shell
terraform apply
```
Check the instance ID in the output - it’s the same. The instance was modified, not replaced.
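To make the “same instance” check repeatable, you can script the comparison. A small sketch - the helper is hypothetical, and `terraform output -raw` assumes the `instance_id` output defined earlier:

```shell
# verify_in_place: compare a resource ID captured before an apply with
# the ID captured after. Matching IDs mean the resource was updated in
# place; differing IDs mean it was replaced. Hypothetical helper.
verify_in_place() {
  if [ "$1" = "$2" ]; then
    echo "updated in place"
  else
    echo "replaced"
  fi
}

# Usage around an apply:
#   before=$(terraform output -raw instance_id)
#   terraform apply
#   after=$(terraform output -raw instance_id)
#   verify_in_place "$before" "$after"
```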
Step 3: Dangerous Changes (Forces Replacement)
Now change the AMI to a different Ubuntu version:
```hcl
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"] # Changed to 20.04
  }
}
```
Run plan:
```shell
terraform plan
```
Critical output:
```
-/+ resource "aws_instance" "example" {
      ~ ami            = "ami-0c55b159cbfafe1f0" -> "ami-0729e439b6769d6ab" # forces replacement
      - instance_state = "running" -> null
      - public_ip      = "3.80.123.45" -> null
```
The `-/+` symbol and the `# forces replacement` comment mean Terraform will:
- Destroy the old instance
- Create a new instance with the new AMI
That means data loss for anything stored on the instance, an IP address change, and a window of unavailability between the destroy and the create.
Don’t apply this yet. This is the danger zone.
Step 4: Understanding AWS’s Immutable Attributes
In AWS, some attributes can’t be modified on existing resources:
- EC2: the AMI and VPC/subnet placement (changing `instance_type`, by contrast, is usually an in-place update that stops and restarts the instance)
- RDS: engine version (sometimes), allocated storage (can only be increased in place)
- Security groups: the VPC ID
When you change an immutable attribute, AWS forces Terraform to replace the resource. Plan always warns you.
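As a rough map from that list to configuration, here is an annotated sketch (the subnet ID is a placeholder, and the replace-vs-update comments describe typical AWS provider behavior):

```hcl
# Annotated sketch only - which changes replace vs update this instance.
resource "aws_instance" "illustration" {
  ami           = data.aws_ami.ubuntu.id # change forces replacement
  subnet_id     = "subnet-0123456789"    # change forces replacement (placeholder ID)
  instance_type = "t2.micro"             # usually updated in place (stop/start)

  tags = {
    Name = "terraform-tutorial"          # updated in place
  }
}
```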
Step 5: Preventing Unwanted Replacement
Add a lifecycle rule to prevent destruction:
```hcl
resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  tags = {
    Name        = "terraform-tutorial"
    Environment = "dev"
    Owner       = "larue"
  }

  lifecycle {
    prevent_destroy = true
  }
}
```
Now try to apply the AMI change:
```shell
terraform apply
```
Terraform errors out with `Error: Instance cannot be destroyed`. This is good - it prevents accidents.
To actually make this change in production, you’d:
- Create a new instance with the new AMI
- Migrate data/traffic
- Remove lifecycle protection
- Destroy old instance
Or use create_before_destroy:
```hcl
lifecycle {
  create_before_destroy = true
}
```
This changes the order: Terraform creates the new instance first, then destroys the old one (the plan symbol becomes `+/-` instead of `-/+`). You still get a brief overlap with two instances running, and any data or traffic migration is still up to you.
The Break (Intentional Failure Scenarios)
Scenario 1: Ignore Plan Warnings
Remove the lifecycle block. Change instance type to something that requires replacement on older generation types:
```hcl
resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro" # Different instance family

  tags = {
    Name = "terraform-tutorial"
  }
}
```
Run plan - a `t2.micro` to `t3.micro` change is typically an in-place update (the provider stops and restarts the instance), but whether a given change replaces the resource depends on the attribute and the provider version. The point: always read the plan output instead of guessing.
Some developers just run `terraform apply -auto-approve` in CI/CD. This is criminally negligent in production.
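A middle ground between blind `-auto-approve` and fully manual applies is to branch on the plan’s exit status. With `-detailed-exitcode`, `terraform plan` returns 0 for no changes, 1 for errors, and 2 when changes are present; the routing helper below is a hypothetical sketch, not a Terraform feature:

```shell
# route_plan_exitcode: turn terraform plan's -detailed-exitcode result
# into a CI decision. Hypothetical helper for illustration.
route_plan_exitcode() {
  case "$1" in
    0) echo "no changes - skip apply" ;;
    2) echo "changes present - require manual approval" ;;
    *) echo "plan failed" ;;
  esac
}

# Usage:
#   terraform plan -detailed-exitcode -out=tfplan
#   route_plan_exitcode $?
```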
Scenario 2: Concurrent Modifications
Open two terminal windows to the same directory. In both:
```shell
terraform apply
```
One will succeed. The other will fail with a state lock error - even the default local backend locks the state file during an apply. With a backend that doesn’t support locking, concurrent applies can silently corrupt state.
This is why Tutorial 04 introduces remote state immediately.
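As a preview of that tutorial, a locking backend looks roughly like this (bucket and table names are placeholders; the S3 backend uses a DynamoDB table for its lock):

```hcl
# Sketch of a remote backend with locking - placeholders, not a working config.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"  # placeholder
    key            = "tutorial-02/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"  # placeholder; enables locking
  }
}
```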
The Recovery
If You Accidentally Replaced a Resource
Check your state file’s backup:
```shell
ls -la terraform.tfstate*
```
You’ll see `terraform.tfstate.backup` - the state as it was before the last apply. It records the IDs and attributes of what existed, which helps if you ever need to re-import a resource.
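If the resource still exists but fell out of state (rather than being destroyed), Terraform 1.5+ can re-adopt it declaratively; the instance ID below is illustrative:

```hcl
# Sketch: re-import an existing instance into state (Terraform 1.5+).
import {
  to = aws_instance.example
  id = "i-0123456789abcdef0" # illustrative ID - use the real instance ID
}
```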
In reality, you can’t “undo” a replacement. The old instance is gone. This is why we:
- Always read plan carefully
- Use `prevent_destroy` for critical resources
- Test in dev environments first
- Use CI/CD with required manual approval for production applies
Exit Criteria
You understand this tutorial if you can:
- Identify the `~`, `-/+`, `+`, and `-` symbols in plan output
- Predict which attribute changes force replacement vs in-place updates
- Explain why changing an EC2 AMI requires replacement
- Use lifecycle blocks to prevent accidental resource destruction
- Describe the difference between `prevent_destroy` and `create_before_destroy`
Key Lessons
- Plan output is a contract - the symbols tell you exactly what will happen
- `-/+` and `# forces replacement` mean downtime unless handled carefully
- AWS attributes have rules - some can be modified in place, others force new resources
- Lifecycle blocks are safety rails - use them on production databases and storage
- Never use `-auto-approve` without review in production pipelines
Why This Matters in Production
In DoD environments, resource replacement can mean:
- Loss of compliance evidence if logs weren’t backed up
- Security group changes that break connectivity
- IP address changes requiring firewall rule updates
- Temporary service unavailability during migrations
On my Air Force program, we use `prevent_destroy` on:
- All RDS databases
- S3 buckets containing audit logs
- KMS keys (can’t be recovered after deletion)
A single missed `forces replacement` warning destroyed a development database because someone didn’t read the plan output. We lost 4 hours restoring from backup.
Real-World Example
Here’s production-grade lifecycle configuration for an RDS database:
```hcl
resource "aws_db_instance" "production" {
  identifier = "production-db"
  engine     = "postgres"

  lifecycle {
    prevent_destroy = true
    ignore_changes = [
      password, # Rotated externally
    ]
  }
}
```
This prevents accidental destruction and ignores password changes managed by AWS Secrets Manager rotation.
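For belt-and-suspenders protection, you can pair the lifecycle rule with AWS-level deletion protection; `deletion_protection` is an `aws_db_instance` argument, and this sketch extends the example above:

```hcl
# Sketch: API-level protection on top of prevent_destroy. With
# deletion_protection set, AWS itself rejects the delete call, so even
# state surgery or a console action can't remove the database.
resource "aws_db_instance" "production" {
  identifier          = "production-db"
  engine              = "postgres"
  deletion_protection = true

  lifecycle {
    prevent_destroy = true
  }
}
```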
Next Steps
Tutorial 03: Variables and Outputs - Learn how to parameterize configurations to prevent hardcoded values that cause drift across environments.
Cleanup
Remove lifecycle protection first:
```hcl
resource "aws_instance" "example" {
  # lifecycle block removed
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-tutorial"
  }
}
```
Then destroy:
```shell
terraform apply   # Apply the lifecycle removal first
terraform destroy
```
Need Help Implementing This?
I help government contractors and defense organizations modernize their infrastructure using Terraform and AWS GovCloud. With 15+ years managing DoD systems and active Secret clearance, I understand compliance requirements that commercial consultants miss.