---
name: terraform-stacks
description: Comprehensive guide for working with HashiCorp Terraform Stacks. Use when creating, modifying, or validating Terraform Stack configurations (.tfcomponent.hcl, .tfdeploy.hcl files), working with stack components and deployments from local modules, public registry, or private registry sources, managing multi-region or multi-environment infrastructure, or troubleshooting Terraform Stacks syntax and structure.
metadata:
copyright: Copyright IBM Corp. 2026
version: "0.0.1"
---
# Terraform Stacks
Terraform Stacks simplify infrastructure provisioning and management at scale by providing a configuration layer above traditional Terraform modules. Stacks enable declarative orchestration of multiple components across environments, regions, and cloud accounts.
## Core Concepts
**Stack**: A complete unit of infrastructure composed of components and deployments that can be managed together.
**Component**: An abstraction around a Terraform module that defines infrastructure pieces. Each component specifies a source module, inputs, and providers.
**Deployment**: An instance of all components in a stack with specific input values. Use deployments for different environments (dev/staging/prod), regions, or cloud accounts.
**Stack Language**: A separate HCL-based language (not regular Terraform HCL) with distinct blocks and file extensions.
## File Structure
Terraform Stacks use specific file extensions:
- **Component configuration**: `.tfcomponent.hcl`
- **Deployment configuration**: `.tfdeploy.hcl`
- **Provider lock file**: `.terraform.lock.hcl` (generated by CLI)
All configuration files must be at the root level of the Stack repository. HCP Terraform processes all files in dependency order.
### Recommended File Organization
```
my-stack/
├── variables.tfcomponent.hcl    # Variable declarations
├── providers.tfcomponent.hcl    # Provider configurations
├── components.tfcomponent.hcl   # Component definitions
├── outputs.tfcomponent.hcl      # Stack outputs
├── deployments.tfdeploy.hcl     # Deployment definitions
├── .terraform.lock.hcl          # Provider lock file (generated)
└── modules/                     # Local modules (optional - only if using local modules)
    ├── vpc/
    └── compute/
```
**Note**: The `modules/` directory is only required when using local module sources. Components can reference modules from:
- Local file paths: `./modules/vpc`
- Public registry: `terraform-aws-modules/vpc/aws`
- Private registry: `app.terraform.io/<org-name>/vpc/aws`
When validating Stack configurations, check component source declarations rather than assuming a local `modules/` directory must exist.
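For instance, a component that sources a module from the public registry needs no `modules/` directory at all. This is a minimal sketch; the module inputs shown (`name`, `cidr`) and the version constraint are illustrative:
```hcl
component "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  inputs = {
    name = var.name_prefix
    cidr = var.vpc_cidr
  }

  providers = {
    aws = provider.aws.this
  }
}
```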
## Component Configuration (.tfcomponent.hcl)
### Variable Block
Declare input variables for the Stack configuration. Variables must define a `type` field and do not support the `validation` argument.
```hcl
variable "aws_region" {
type = string
description = "AWS region for deployments"
default = "us-west-1"
}
variable "identity_token" {
type = string
description = "OIDC identity token"
ephemeral = true # Does not persist to state file
}
variable "instance_count" {
type = number
nullable = false
}
```
### Required Providers Block
Provider requirements work the same way as in traditional Terraform, except that in the Stack language `required_providers` is a top-level block rather than being nested inside a `terraform` block:
```hcl
required_providers {
  aws = {
    source  = "hashicorp/aws"
    version = "~> 5.7.0"
  }
  random = {
    source  = "hashicorp/random"
    version = "~> 3.5.0"
  }
}
```
### Provider Block
Provider blocks differ from traditional Terraform:
1. Support `for_each` meta-argument
2. Define aliases in the block header (not as an argument)
3. Accept configuration through a `config` block
**Single Provider Configuration:**
```hcl
provider "aws" "this" {
config {
region = var.aws_region
assume_role_with_web_identity {
role_arn = var.role_arn
web_identity_token = var.identity_token
}
}
}
```
**Multiple Provider Configurations with for_each:**
```hcl
provider "aws" "configurations" {
for_each = var.regions
config {
region = each.value
assume_role_with_web_identity {
role_arn = var.role_arn
web_identity_token = var.identity_token
}
}
}
```
### Component Block
Each Stack requires at least one component block. Add a component for each module to include in the Stack.
**Component Source**: Each component's `source` argument must specify one of the following source types:
- Local file path: `./modules/vpc`
- Public registry: `terraform-aws-modules/vpc/aws`
- Private registry: `app.terraform.io/my-org/vpc/aws`
- Git repository: `git::https://github.com/org/repo.git//modules/vpc?ref=v1.0.0`
```hcl
component "vpc" {
source = "./modules/vpc"
inputs = {
cidr_block = var.vpc_cidr
name_prefix = var.name_prefix
}
providers = {
aws = provider.aws.this
}
}
component "networking" {
source = "app.terraform.io/my-org/vpc/aws"
version = "2.1.0"
inputs = {
cidr_block = var.vpc_cidr
environment = var.environment
}
providers = {
aws = provider.aws.this
}
}
component "compute" {
source = "./modules/compute"
inputs = {
vpc_id = component.vpc.vpc_id
subnet_ids = component.vpc.private_subnet_ids
instance_type = var.instance_type
}
providers = {
aws = provider.aws.this
}
}
```
**Component with for_each for Multi-Region:**
```hcl
component "s3" {
for_each = var.regions
source = "./modules/s3"
inputs = {
region = each.value
tags = var.common_tags
}
providers = {
aws = provider.aws.configurations[each.value]
}
}
```
**Key Points:**
- Reference component outputs using `component.<name>.<output>`
- All inputs are provided as a single `inputs` object
- Provider references are normal values: `provider.<type>.<alias>`
- Dependencies are automatically inferred from component references
### Output Block
Outputs require a `type` argument and do not support `preconditions`:
```hcl
output "vpc_id" {
type = string
description = "VPC ID"
value = component.vpc.vpc_id
}
output "endpoint_urls" {
type = map(string)
value = {
for region, comp in component.api : region => comp.endpoint_url
}
sensitive = false
}
```
### Locals Block
Works exactly as in traditional Terraform:
```hcl
locals {
  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform Stacks"
    Project     = var.project_name
  }

  region_config = {
    for region in var.regions : region => {
      name_suffix = "${var.environment}-${region}"
    }
  }
}
```
### Removed Block
Use the `removed` block to safely remove a component from a Stack. HCP Terraform still needs the component's source and providers in order to destroy its resources, so the block declares both.
```hcl
removed {
  from   = component.old_component
  source = "./modules/old-module"

  providers = {
    aws = provider.aws.this
  }
}
```
## Deployment Configuration (.tfdeploy.hcl)
### Identity Token Block
Generate JWT tokens for OIDC authentication with cloud providers:
```hcl
identity_token "aws" {
audience = ["aws.workload.identity"]
}
identity_token "azure" {
audience = ["api://AzureADTokenExchange"]
}
```
Reference tokens in deployments using `identity_token.<name>.jwt`
### Locals Block
Define local values for deployment configuration:
```hcl
locals {
  aws_regions = ["us-west-1", "us-east-1", "eu-west-1"]
  role_arn    = "arn:aws:iam::123456789012:role/hcp-terraform-stacks"
}
```
### Deployment Block
Define deployment instances. Each Stack requires at least one deployment (maximum 20 per Stack).
**Single Environment Deployment:**
```hcl
deployment "production" {
inputs = {
aws_region = "us-west-1"
instance_count = 3
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
```
**Multiple Environment Deployments:**
```hcl
deployment "development" {
inputs = {
aws_region = "us-east-1"
instance_count = 1
name_suffix = "dev"
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
deployment "staging" {
inputs = {
aws_region = "us-east-1"
instance_count = 2
name_suffix = "staging"
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
deployment "production" {
inputs = {
aws_region = "us-west-1"
instance_count = 5
name_suffix = "prod"
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
```
**Destroying a Deployment:**
To safely remove a deployment:
```hcl
deployment "old_environment" {
inputs = {
aws_region = "us-west-1"
instance_count = 2
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
destroy = true # Mark for destruction
}
```
After the plan is applied and the deployment is destroyed, remove the deployment block from your configuration.
### Deployment Group Block
Group deployments together to configure shared settings (Premium feature). **Best Practice**: Always create deployment groups for all deployments, even single deployments, to enable future auto-approval rules and maintain consistent configuration patterns.
```hcl
deployment_group "canary" {
deployments = [
deployment.dev,
deployment.staging
]
}
deployment_group "production" {
deployments = [
deployment.prod_us_east,
deployment.prod_us_west
]
}
```
### Deployment Auto-Approve Block
Define rules that automatically approve deployment plans based on specific conditions (Premium feature):
```hcl
deployment_auto_approve "safe_changes" {
deployment_group = deployment_group.canary
check {
condition = context.plan.changes.remove == 0
reason = "Cannot auto-approve plans with resource deletions"
}
check {
condition = context.plan.applyable
reason = "Plan must be applyable"
}
}
deployment_auto_approve "applyable_only" {
deployment_group = deployment_group.production
check {
condition = context.plan.applyable
reason = "Plan must be successful"
}
}
```
**Available Context Variables:**
- `context.plan.applyable` - Plan succeeded without errors
- `context.plan.changes.add` - Number of resources to add
- `context.plan.changes.change` - Number of resources to change
- `context.plan.changes.remove` - Number of resources to remove
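As a hedged sketch (the rule name `add_only` is illustrative), these context values can be combined so that only purely additive plans for an existing deployment group are approved automatically:
```hcl
deployment_auto_approve "add_only" {
  deployment_group = deployment_group.canary

  check {
    condition = context.plan.changes.change == 0 && context.plan.changes.remove == 0
    reason    = "Cannot auto-approve plans that change or remove resources"
  }

  check {
    condition = context.plan.applyable
    reason    = "Plan must be applyable"
  }
}
```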
**Note:** `orchestrate` blocks are deprecated. Use `deployment_group` and `deployment_auto_approve` instead.
### Publish Output Block
Export outputs from a Stack for use in other Stacks (linked Stacks):
```hcl
publish_output "vpc_id_network" {
type = string
value = deployment.network.vpc_id
}
publish_output "subnet_ids" {
type = list(string)
value = deployment.network.private_subnet_ids
}
```
### Upstream Input Block
Reference published outputs from another Stack:
```hcl
upstream_input "network_stack" {
type = "stack"
source = "app.terraform.io/my-org/my-project/networking-stack"
}
deployment "application" {
inputs = {
vpc_id = upstream_input.network_stack.vpc_id_network
subnet_ids = upstream_input.network_stack.subnet_ids
}
}
```
## Terraform Stacks CLI
### Initialize and Validate
Generate provider lock file:
```bash
terraform stacks providers-lock
```
Validate Stack configuration:
```bash
terraform stacks validate
```
### Plan and Apply
Plan a specific deployment:
```bash
terraform stacks plan --deployment=production
```
Apply a deployment:
```bash
terraform stacks apply --deployment=production
```
## Common Patterns
### Multi-Region Deployment
```hcl
# variables.tfcomponent.hcl
variable "regions" {
  type    = set(string)
  default = ["us-west-1", "us-east-1", "eu-west-1"]
}

# providers.tfcomponent.hcl
provider "aws" "regional" {
  for_each = var.regions

  config {
    region = each.value

    assume_role_with_web_identity {
      role_arn           = var.role_arn
      web_identity_token = var.identity_token
    }
  }
}

# components.tfcomponent.hcl
component "regional_infra" {
  for_each = var.regions
  source   = "./modules/regional"

  inputs = {
    region = each.value
  }

  providers = {
    aws = provider.aws.regional[each.value]
  }
}
```
### Component Dependencies
Dependencies are automatically inferred when one component references another's output:
```hcl
component "database" {
source = "./modules/rds"
inputs = {
subnet_ids = component.vpc.private_subnet_ids # Creates dependency
}
providers = {
aws = provider.aws.this
}
}
```
## Best Practices
1. **Component Granularity**: Create components for logical infrastructure units that share a lifecycle
2. **Module Compatibility**: Modules used with Stacks cannot include `provider` blocks; configure providers in the Stack configuration instead (see the sketch after this list)
3. **State Isolation**: Each deployment has its own isolated state
4. **Input Variables**: Use variables for values that differ across deployments; use locals for shared values
5. **Provider Lock Files**: Always generate and commit `.terraform.lock.hcl` to version control
6. **Naming Conventions**: Use descriptive names for components and deployments
7. **Deployment Groups**: Always organize deployments into deployment groups, even if you only have one deployment. Deployment groups enable auto-approval rules, logical organization, and provide a foundation for scaling. While deployment groups are a Premium feature, organizing your configurations to use them is a best practice for all Stacks
8. **Testing**: Test Stack configurations in dev/staging deployments before production
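To illustrate point 2, a Stack-compatible module declares its provider requirements but never configures a provider itself. This is a minimal sketch; the variable, resource, and output shown are illustrative:
```hcl
# modules/vpc/main.tf -- declares provider requirements only; no provider block
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.7"
    }
  }
}

variable "cidr_block" {
  type = string
}

resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
}

output "vpc_id" {
  value = aws_vpc.this.id
}
```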
## Troubleshooting
### Circular Dependencies
**Issue**: Component A references Component B, and Component B references Component A
**Solution**: Refactor to break the circular reference or use intermediate components
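As a hedged sketch of the second option (component and module names are illustrative), a cycle between an application component and a security-group component can be broken by moving the shared piece into its own component so dependencies flow in one direction:
```hcl
# Before (invalid): component.app read component.firewall.sg_id while
# component.firewall read component.app.instance_ids, forming a cycle.

# After: the security group lives in an intermediate component, and both
# downstream components depend on it in one direction only.
component "network_security" {
  source = "./modules/security-group"

  inputs = {
    vpc_id = component.vpc.vpc_id
  }

  providers = {
    aws = provider.aws.this
  }
}

component "app" {
  source = "./modules/app"

  inputs = {
    security_group_id = component.network_security.sg_id
    subnet_ids        = component.vpc.private_subnet_ids
  }

  providers = {
    aws = provider.aws.this
  }
}
```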
### Deployment Limit
HCP Terraform supports a maximum of 20 deployments per Stack. For more instances, use multiple Stacks or `for_each` within components.
## References
For detailed block specifications and advanced features, see:
- `references/component-blocks.md` - Complete component block reference
- `references/deployment-blocks.md` - Complete deployment block reference
- `references/examples.md` - Complete working examples for common scenarios