February 8, 2018 Marie H.

Terraform Workspaces for Multi-Environment Deploys



The question I get asked most often when helping teams set up Terraform is: "how do I manage dev, staging, and prod without copy-pasting everything?" There are a few answers to this. Workspaces are one of them. Here's when they work, when they don't, and what a real example looks like.

The problem

You've got a Terraform config. It works great. Then someone says "we need a staging environment that's basically the same but smaller." The obvious answer is to automate it, but how you structure that automation matters a lot.

The naive approach is to copy the whole directory and maintain a separate config per environment. This works right up until you have a bug to fix in the shared parts, and now you're fixing it in every copy. Not great.

What workspaces are

Terraform workspaces let you use the same configuration with separate state files. Each workspace gets its own state, so terraform apply in the staging workspace doesn't know anything about the resources in prod. Mechanically, state files for non-default workspaces live at terraform.tfstate.d/<workspace-name>/terraform.tfstate (or the equivalent path in your remote backend).

$ terraform workspace list
* default

$ terraform workspace new staging
Created and switched to workspace "staging"!

$ terraform workspace new prod
Created and switched to workspace "prod"!

$ terraform workspace list
  default
* prod
  staging

$ terraform workspace select staging
Switched to workspace "staging".

Using workspace names in your config

The magic variable is ${terraform.workspace}. You can use it anywhere you'd use a regular variable. Here's a concrete example — an S3 bucket that gets a workspace-specific name so you're not fighting over the same bucket across environments:

variable "project" {
  default = "myapp"
}

resource "aws_s3_bucket" "app_data" {
  bucket = "${var.project}-${terraform.workspace}-data"
  acl    = "private"

  tags = {
    Environment = "${terraform.workspace}"
    Project     = "${var.project}"
  }
}

In staging this creates myapp-staging-data. In prod it creates myapp-prod-data. Clean and automatic.

You can also use the workspace name to look up environment-specific values with a map:

# declared here so the snippet is self-contained; set via a tfvars file
variable "ami" {}

variable "instance_type" {
  default = {
    default = "t2.micro"
    staging = "t2.small"
    prod    = "t2.medium"
  }
}

resource "aws_instance" "app" {
  ami           = "${var.ami}"
  instance_type = "${lookup(var.instance_type, terraform.workspace)}"

  tags = {
    Name        = "${var.project}-${terraform.workspace}"
    Environment = "${terraform.workspace}"
  }
}

Run terraform plan in each workspace and you get environment-appropriate instance types without duplicating the resource block.
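The workspace name also works in conditionals, not just lookups. Here's a sketch (same 0.11-era syntax as the rest of the post, same made-up resource names) of giving prod two copies of the app instance while every other workspace gets one:

resource "aws_instance" "app" {
  # two instances in prod, one everywhere else
  count         = "${terraform.workspace == "prod" ? 2 : 1}"
  ami           = "${var.ami}"
  instance_type = "${lookup(var.instance_type, terraform.workspace)}"

  tags = {
    Name        = "${var.project}-${terraform.workspace}-${count.index}"
    Environment = "${terraform.workspace}"
  }
}

In staging you get one instance; in prod, two, with count.index keeping the Name tags distinct.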

Remote backend with workspaces

If you're using S3 as a remote backend (you should be), workspaces work transparently. Terraform namespaces the state keys automatically:

terraform {
  backend "s3" {
    bucket = "mycompany-terraform-state"
    key    = "myapp/terraform.tfstate"
    region = "us-east-1"
  }
}

In S3 the state files end up at:
- myapp/terraform.tfstate (default workspace)
- env:/staging/myapp/terraform.tfstate
- env:/prod/myapp/terraform.tfstate

When workspaces work well

Workspaces are a good fit when your environments are genuinely similar — same architecture, mostly the same resources, just different sizes or names. Dev, staging, and prod for a single application where the main differences are instance sizes and bucket names? Workspaces are great for that.

When to use separate state files instead

Here's the opinion part: I don't use workspaces when environments have meaningfully different architectures. If prod has a read replica, a bastion host, and a WAF, and staging has none of those, you're going to end up with a rat's nest of conditionals in your config. At that point, separate directories with separate state files are just cleaner.

Also, if you need strong blast-radius isolation — where running terraform apply in the wrong workspace could wreck production — separate state backends in separate AWS accounts give you a much harder boundary. A workspace is just a state file; there's nothing stopping you from running terraform workspace select prod && terraform destroy if you're having a bad day.

My rule of thumb: workspaces for "same thing, different names/sizes", separate directories for "genuinely different infrastructure."
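If you go the separate-directories route, the layout I usually end up with looks something like this (the names are illustrative), with the shared resources pulled into a module so the fix-the-same-bug-everywhere problem goes away:

myapp/
  modules/
    app/          # shared resources: instances, buckets, IAM
  staging/
    main.tf       # calls ../modules/app with staging-sized inputs
    backend.tf    # points at its own state key
  prod/
    main.tf       # same module, prod-sized inputs
    backend.tf    # separate state (ideally a separate account)

Each environment directory has its own state, its own backend config, and only a few lines of its own code.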

Checking your current workspace in CI

Worth noting: in CI you need to explicitly select the right workspace before running plan/apply. I always add a step that prints the current workspace so it's obvious in the build log:

$ terraform workspace show
staging

One less thing to debug when something goes wrong.
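For completeness, here's roughly what that CI step looks like as a shell script. This is a sketch: the ENV variable name is whatever your pipeline actually provides, and the select-or-create fallback covers the first run against a fresh backend:

#!/bin/sh
set -eu

# ENV is assumed to be injected by the CI system (hypothetical name)
ENV="${ENV:?must be set to staging or prod}"

# Select the workspace; create it if this is the first run
terraform workspace select "$ENV" || terraform workspace new "$ENV"

# Print the active workspace so it shows up in the build log
terraform workspace show

terraform plan -out=tfplan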