# Terraform State Commands Cheat Sheet

The day Terraform and reality disagree, you reach for these. Back up state first: these commands edit the source of truth.
## Backup first, every time

State is the source of truth. Touch it without a backup and you may end up re-creating production resources. There is no Ctrl-Z.

- `terraform state pull > state-$(date +%Y%m%d-%H%M%S).tfstate`, local snapshot before anything destructive
- `terraform state push state-2026-04-13-1530.tfstate`, restore from snapshot if you trash it
- S3 backend? Versioning is your friend: turn it on and set MFA delete
- Always run on a clean working tree (`git status` empty) so you can `git diff` the implied plan
- Use state locking: concurrent state edits corrupt state
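The pull-then-timestamp idea above can be wrapped in a tiny pre-surgery ritual. A sketch, assuming `terraform` is on PATH in an initialized working directory; the filename pattern is just a convention:

```shell
#!/bin/sh
# Snapshot current state before any edit; restore later with:
#   terraform state push "$backup"
backup="state-$(date +%Y%m%d-%H%M%S).tfstate"
terraform state pull > "$backup"
echo "state backed up to $backup"
```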
## Inspect

Most "what's in state?" questions answer themselves with these.

- `terraform state list`, every resource address in state
- `terraform state list module.api`, narrowed to a module
- `terraform state list | grep aws_instance`, narrowed by type (`state list` takes addresses, not globs)
- `terraform state show aws_instance.web`, full attributes for one resource
- `terraform show -json | jq`, full state as JSON for grep/jq pipelines
- `terraform providers`, which providers are required and at what versions
- `terraform graph | dot -Tsvg > graph.svg`, dependency graph
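The JSON output can feed a quick inventory. A sketch, assuming `jq` is installed; `.values.root_module` is the documented JSON state layout, but note that child modules live under `child_modules` and are not walked here:

```shell
# Print "type  address" for every root-module resource in state.
terraform show -json \
  | jq -r '.values.root_module.resources[]? | "\(.type)  \(.address)"'
```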
## Move and rename

You renamed a module or resource in HCL and now Terraform wants to destroy/recreate. Don't let it. Move state to match.

- `terraform state mv aws_instance.web aws_instance.api`, rename a resource
- `terraform state mv module.web module.api`, rename a module
- `terraform state mv 'aws_instance.web[0]' 'aws_instance.web["primary"]'`, count-to-for_each migration
- `terraform state mv -state-out=../other/terraform.tfstate aws_db_instance.users aws_db_instance.users`, move a resource between state files
- Then run `terraform plan`; it should be a no-op. If it isn't, revert and investigate
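The "plan should be a no-op" check can be scripted with `plan -detailed-exitcode` (exit 0 means no changes, 2 means changes pending, 1 means error). The addresses below are hypothetical:

```shell
# Rename in state, then fail loudly if the plan is not a no-op.
terraform state mv module.web module.api
terraform plan -detailed-exitcode; rc=$?
if [ "$rc" -ne 0 ]; then
  echo "plan is not a no-op (exit $rc): revert the mv and investigate"
fi
```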
## Remove

Forget a resource without destroying it. The cloud resource keeps running; Terraform just stops managing it.

- `terraform state rm aws_instance.legacy`, remove a single resource
- `terraform state rm module.deprecated`, remove an entire module (the module address covers all of its resources)
- After `state rm`, the cloud resource still exists. You're now responsible for it manually, or you re-import it later
- Use case: handing a resource to another team's Terraform repo, or untangling shared resources
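A handoff sketch: capture the cloud ID before forgetting the resource, so the receiving repo can import it. The `awk` parse of `state show` output is a convenience, not a stable interface, and the address is hypothetical:

```shell
# Grab the id attribute, then drop the resource from this state.
id=$(terraform state show -no-color aws_instance.legacy \
  | awk '/^ *id / {gsub(/"/, ""); print $3}')
terraform state rm aws_instance.legacy
echo "receiving repo runs: terraform import aws_instance.legacy $id"
```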
## Import

Bring an existing cloud resource under Terraform management. The HCL must already exist; import only updates state.

- `terraform import aws_instance.web i-0abc123`, classic syntax, single resource
- `terraform import 'aws_security_group_rule.allow["http"]' sg-0123_ingress_tcp_80_80_0.0.0.0/0`, for_each address
- `terraform import 'module.api.aws_instance.web' i-0abc123`, through a module
- 1.5+: declarative `import {}` block with `id` and `to`, lets you commit imports to source control and run `terraform plan` to preview
- Always run `terraform plan` after import; drift between cloud and HCL shows up here. Reconcile before `apply`
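The 1.5+ declarative form looks like this (address and ID are illustrative); pair it with `terraform plan -generate-config-out=generated.tf` if the matching HCL doesn't exist yet:

```hcl
# Reviewed in code, applied like any other change; remove the block after import.
import {
  to = aws_instance.web
  id = "i-0abc123"
}
```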
## Taint and refresh

Force a resource to be recreated, or sync state to current cloud reality.

- `terraform apply -replace=aws_instance.web`, modern way to force recreation (`taint` is deprecated)
- `terraform apply -replace='module.api.aws_instance.web'`, through a module
- `terraform untaint aws_instance.web`, clear a taint mark left by older Terraform versions
- `terraform apply -refresh-only`, sync state to current cloud state, no resource changes
- `terraform plan -refresh-only`, preview what refresh would change
- Refresh-only is the right tool when state thinks an instance has tag X but the cloud says tag Y
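Several `-replace` flags can ride on one apply. A sketch that builds the flag list first so you can eyeball it before running (addresses hypothetical, `echo` left in as a dry run):

```shell
# Force recreation of several instances in a single apply.
flags=""
for addr in aws_instance.web aws_instance.worker; do
  flags="$flags -replace=$addr"
done
echo terraform apply$flags   # drop the echo to actually run it
```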
## Rescue scenarios

The situations where state surgery actually earns its keep.

- Resource exists in cloud, not in state: `terraform import <address> <cloud_id>`, then `plan` to verify no drift
- Resource in state but deleted in cloud: `terraform state rm <address>`, then re-apply if you want it back
- Renamed in HCL, plan wants destroy/recreate: `terraform state mv <old> <new>`
- Split monolith into modules: `terraform state mv <old.address> module.<new>.<address>` for each resource; expect a no-op plan after
- Migrate count to for_each: `terraform state mv 'res.x[0]' 'res.x["primary"]'` for each instance
- Two states ended up with the same resource: `state rm` from one, leave the other; commit; run plan in both to confirm
- State file corrupted: `terraform state push <backup>`, exactly why you backed up at the start
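The count-to-for_each rescue is one `state mv` per index. A loop that prints the commands for review before you run them; the resource name and keys are hypothetical:

```shell
# Map each count index to a for_each key, in order.
i=0
for key in primary secondary; do
  echo terraform state mv "aws_instance.web[$i]" "aws_instance.web[\"$key\"]"
  i=$((i + 1))
done
# Drop the echo once the printed commands look right.
```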