This skill generates production-ready Terraform configurations for AWS, GCP, and Azure with modular structure and best practices.
```
npx playbooks add skill ehtbanton/claudeskillsrepo --skill terraform-module-builder
```
---
name: terraform-module-builder
description: Generate Terraform configuration files for infrastructure as code including AWS, GCP, and Azure resources with modules and best practices. Triggers on "create Terraform config", "generate terraform for", "infrastructure as code", "IaC for AWS/GCP/Azure".
---
# Terraform Module Builder
Generate production-ready Terraform configurations for cloud infrastructure provisioning.
## Output Requirements
**File Output:** `.tf` files (main.tf, variables.tf, outputs.tf, etc.)
**Format:** HashiCorp Configuration Language (HCL)
**Standards:** Terraform 1.5+
## When Invoked
Immediately generate complete Terraform configurations. Include variables, outputs, and proper resource naming.
## File Structure
```
terraform/
├── main.tf           # Main resource definitions
├── variables.tf      # Input variables
├── outputs.tf        # Output values
├── providers.tf      # Provider configurations
├── versions.tf       # Version constraints
├── locals.tf         # Local values
├── data.tf           # Data sources
└── terraform.tfvars  # Variable values (example)
```
## Complete Templates
### AWS ECS Fargate Service
```hcl
# versions.tf
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "ecs/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}

# providers.tf
provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Environment = var.environment
      Project     = var.project_name
      ManagedBy   = "Terraform"
    }
  }
}
# variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Environment name"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "project_name" {
  description = "Project name"
  type        = string
}

variable "app_name" {
  description = "Application name"
  type        = string
}

variable "container_image" {
  description = "Docker image for the container"
  type        = string
}

variable "container_port" {
  description = "Port the container listens on"
  type        = number
  default     = 8080
}

variable "cpu" {
  description = "CPU units for the task"
  type        = number
  default     = 256
}

variable "memory" {
  description = "Memory for the task in MB"
  type        = number
  default     = 512
}

variable "desired_count" {
  description = "Desired number of tasks"
  type        = number
  default     = 2
}

variable "min_capacity" {
  description = "Minimum number of tasks"
  type        = number
  default     = 1
}

variable "max_capacity" {
  description = "Maximum number of tasks"
  type        = number
  default     = 10
}
# locals.tf
locals {
  name_prefix = "${var.project_name}-${var.environment}"

  common_tags = {
    Application = var.app_name
  }
}

# data.tf
data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_caller_identity" "current" {}
# main.tf
# VPC
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "${local.name_prefix}-vpc"
  cidr = "10.0.0.0/16"

  azs             = slice(data.aws_availability_zones.available.names, 0, 3)
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = var.environment != "prod"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = local.common_tags
}

# ECS Cluster
resource "aws_ecs_cluster" "main" {
  name = "${local.name_prefix}-cluster"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }

  tags = local.common_tags
}

resource "aws_ecs_cluster_capacity_providers" "main" {
  cluster_name       = aws_ecs_cluster.main.name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  default_capacity_provider_strategy {
    base              = 1
    weight            = 100
    capacity_provider = "FARGATE"
  }
}
# Security Groups
resource "aws_security_group" "alb" {
  name        = "${local.name_prefix}-alb-sg"
  description = "Security group for ALB"
  vpc_id      = module.vpc.vpc_id

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-alb-sg"
  })
}

resource "aws_security_group" "ecs_tasks" {
  name        = "${local.name_prefix}-ecs-tasks-sg"
  description = "Security group for ECS tasks"
  vpc_id      = module.vpc.vpc_id

  ingress {
    description     = "Allow traffic from ALB"
    from_port       = var.container_port
    to_port         = var.container_port
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-ecs-tasks-sg"
  })
}
# Application Load Balancer
resource "aws_lb" "main" {
  name               = "${local.name_prefix}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = module.vpc.public_subnets

  enable_deletion_protection = var.environment == "prod"

  tags = local.common_tags
}

resource "aws_lb_target_group" "main" {
  name        = "${local.name_prefix}-tg"
  port        = var.container_port
  protocol    = "HTTP"
  vpc_id      = module.vpc.vpc_id
  target_type = "ip"

  health_check {
    enabled             = true
    healthy_threshold   = 2
    unhealthy_threshold = 3
    timeout             = 5
    interval            = 30
    path                = "/health"
    matcher             = "200"
  }

  tags = local.common_tags
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}
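
# NOTE: The HTTP listener redirects to port 443, so an HTTPS listener is
# required for traffic to reach the service. This is a sketch: it assumes a
# var.acm_certificate_arn variable (not declared in variables.tf above)
# holding the ARN of a validated ACM certificate in the same region.
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = var.acm_certificate_arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.main.arn
  }
}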
# IAM Roles
resource "aws_iam_role" "ecs_task_execution" {
  name = "${local.name_prefix}-ecs-task-execution"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      }
    ]
  })

  tags = local.common_tags
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role       = aws_iam_role.ecs_task_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

resource "aws_iam_role" "ecs_task" {
  name = "${local.name_prefix}-ecs-task"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      }
    ]
  })

  tags = local.common_tags
}

# CloudWatch Log Group
resource "aws_cloudwatch_log_group" "main" {
  name              = "/ecs/${local.name_prefix}"
  retention_in_days = var.environment == "prod" ? 90 : 14

  tags = local.common_tags
}
# ECS Task Definition
resource "aws_ecs_task_definition" "main" {
  family                   = "${local.name_prefix}-task"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = var.cpu
  memory                   = var.memory
  execution_role_arn       = aws_iam_role.ecs_task_execution.arn
  task_role_arn            = aws_iam_role.ecs_task.arn

  container_definitions = jsonencode([
    {
      name      = var.app_name
      image     = var.container_image
      essential = true

      portMappings = [
        {
          containerPort = var.container_port
          hostPort      = var.container_port
          protocol      = "tcp"
        }
      ]

      environment = [
        {
          name  = "NODE_ENV"
          value = var.environment
        },
        {
          name  = "PORT"
          value = tostring(var.container_port)
        }
      ]

      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.main.name
          "awslogs-region"        = var.aws_region
          "awslogs-stream-prefix" = "ecs"
        }
      }

      healthCheck = {
        command     = ["CMD-SHELL", "wget -q -O /dev/null http://localhost:${var.container_port}/health || exit 1"]
        interval    = 30
        timeout     = 5
        retries     = 3
        startPeriod = 60
      }
    }
  ])

  tags = local.common_tags
}
# ECS Service
resource "aws_ecs_service" "main" {
  name                               = "${local.name_prefix}-service"
  cluster                            = aws_ecs_cluster.main.id
  task_definition                    = aws_ecs_task_definition.main.arn
  desired_count                      = var.desired_count
  deployment_minimum_healthy_percent = 100
  deployment_maximum_percent         = 200
  launch_type                        = "FARGATE"
  scheduling_strategy                = "REPLICA"
  platform_version                   = "LATEST"

  network_configuration {
    security_groups  = [aws_security_group.ecs_tasks.id]
    subnets          = module.vpc.private_subnets
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.main.arn
    container_name   = var.app_name
    container_port   = var.container_port
  }

  deployment_circuit_breaker {
    enable   = true
    rollback = true
  }

  lifecycle {
    ignore_changes = [desired_count]
  }

  tags = local.common_tags
}

# Auto Scaling
resource "aws_appautoscaling_target" "ecs" {
  max_capacity       = var.max_capacity
  min_capacity       = var.min_capacity
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.main.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "${local.name_prefix}-cpu-autoscaling"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }

    target_value       = 70.0
    scale_in_cooldown  = 300
    scale_out_cooldown = 60
  }
}
# outputs.tf
output "vpc_id" {
  description = "VPC ID"
  value       = module.vpc.vpc_id
}

output "cluster_name" {
  description = "ECS Cluster name"
  value       = aws_ecs_cluster.main.name
}

output "service_name" {
  description = "ECS Service name"
  value       = aws_ecs_service.main.name
}

output "alb_dns_name" {
  description = "ALB DNS name"
  value       = aws_lb.main.dns_name
}

output "alb_zone_id" {
  description = "ALB Zone ID"
  value       = aws_lb.main.zone_id
}

output "log_group_name" {
  description = "CloudWatch Log Group name"
  value       = aws_cloudwatch_log_group.main.name
}
```
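The file structure above lists an example `terraform.tfvars`; the ECS template does not show one, so here is a minimal sketch with purely illustrative placeholder values (the account ID, project, and image tag are made up — substitute your own):

```hcl
# terraform.tfvars (example values — replace before applying)
aws_region      = "us-east-1"
environment     = "dev"
project_name    = "acme"
app_name        = "api"
container_image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest"
container_port  = 8080
desired_count   = 2
```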
### AWS S3 + CloudFront Static Site
```hcl
# main.tf - Static Website
# S3 Bucket
resource "aws_s3_bucket" "website" {
  bucket = "${var.domain_name}-website"

  tags = local.common_tags
}

resource "aws_s3_bucket_public_access_block" "website" {
  bucket = aws_s3_bucket.website.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "website" {
  bucket = aws_s3_bucket.website.id

  versioning_configuration {
    status = "Enabled"
  }
}

# CloudFront Origin Access Control
resource "aws_cloudfront_origin_access_control" "website" {
  name                              = "${local.name_prefix}-oac"
  description                       = "OAC for ${var.domain_name}"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

# CloudFront Distribution
resource "aws_cloudfront_distribution" "website" {
  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Distribution for ${var.domain_name}"
  default_root_object = "index.html"
  price_class         = "PriceClass_100"
  aliases             = [var.domain_name, "www.${var.domain_name}"]

  origin {
    domain_name              = aws_s3_bucket.website.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.website.id
    origin_id                = "S3-${aws_s3_bucket.website.id}"
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-${aws_s3_bucket.website.id}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
  }

  # SPA error handling
  custom_error_response {
    error_code         = 404
    response_code      = 200
    response_page_path = "/index.html"
  }

  custom_error_response {
    error_code         = 403
    response_code      = 200
    response_page_path = "/index.html"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn      = var.acm_certificate_arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }

  tags = local.common_tags
}

# S3 Bucket Policy
resource "aws_s3_bucket_policy" "website" {
  bucket = aws_s3_bucket.website.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowCloudFrontServicePrincipal"
        Effect = "Allow"
        Principal = {
          Service = "cloudfront.amazonaws.com"
        }
        Action   = "s3:GetObject"
        Resource = "${aws_s3_bucket.website.arn}/*"
        Condition = {
          StringEquals = {
            "AWS:SourceArn" = aws_cloudfront_distribution.website.arn
          }
        }
      }
    ]
  })
}
```
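The static-site template references `var.domain_name` and `var.acm_certificate_arn` without declaring them. A matching `variables.tf` sketch (names inferred from the template) might look like this; note that CloudFront requires the ACM certificate to live in `us-east-1` regardless of where the bucket is:

```hcl
variable "domain_name" {
  description = "Apex domain for the site (e.g. example.com)"
  type        = string
}

variable "acm_certificate_arn" {
  description = "ARN of a validated ACM certificate in us-east-1 covering the domain and www alias"
  type        = string
}
```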
## Best Practices
### Variable Validation
```hcl
variable "environment" {
  type = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}
```
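Validation blocks also work for numeric constraints. A sketch using the `container_port` variable from the ECS template (bounds are an illustrative choice):

```hcl
variable "container_port" {
  type = number

  validation {
    condition     = var.container_port > 0 && var.container_port < 65536
    error_message = "container_port must be a valid TCP port (1-65535)."
  }
}
```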
### Resource Naming
```hcl
locals {
  name_prefix = "${var.project}-${var.environment}"
}

resource "aws_s3_bucket" "example" {
  bucket = "${local.name_prefix}-data"
}
```
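### Sensitive Variables

The checklist below requires sensitive variables to be marked `sensitive = true`, which keeps their values out of plan and apply output. A minimal sketch (the `db_password` name is illustrative):

```hcl
variable "db_password" {
  description = "Master password for the database"
  type        = string
  sensitive   = true # redacted from terraform plan/apply output
}
```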
## Validation Checklist
Before outputting, verify:
- [ ] `required_version` constraint in versions.tf
- [ ] Provider versions pinned
- [ ] Variables have descriptions and types
- [ ] Sensitive variables marked `sensitive = true`
- [ ] Resources have tags
- [ ] Outputs have descriptions
- [ ] Security groups follow least privilege
## Example Invocations
**Prompt:** "Create Terraform config for AWS VPC with public/private subnets"
**Output:** Complete Terraform with VPC, subnets, NAT, route tables.
**Prompt:** "Generate Terraform for RDS PostgreSQL with read replicas"
**Output:** Complete Terraform with RDS, parameter groups, security groups.
**Prompt:** "Terraform module for Kubernetes cluster on GCP"
**Output:** Complete Terraform with GKE cluster, node pools, networking.
This skill generates production-ready Terraform module templates and complete .tf file sets for AWS, GCP, and Azure infrastructure. It produces opinionated, best-practice configurations (providers, versions, variables, outputs, locals, data sources) and ready-to-use module layouts for common patterns like ECS Fargate, S3+CloudFront static sites, and reusable network stacks. Use it to jumpstart IaC while preserving Terraform 1.5+ standards and sensible defaults.
When triggered, the skill emits a complete Terraform directory structure with main.tf, variables.tf, outputs.tf, providers.tf, versions.tf, locals.tf, data.tf and example terraform.tfvars. It scaffolds modules and resources with clear naming, tags, validations, and backend configuration. Templates include autoscaling, logging, IAM roles, security groups, and optional production protections like deletion protection and state locking.
**Does the skill output ready-to-apply Terraform files?**
Yes. It generates fully formed `.tf` files you can review, customize, and run through `terraform init`/`plan`/`apply` with Terraform 1.5+ and the indicated provider versions.

**Can I customize naming, tags, and defaults?**
Yes. Templates expose variables for `project_name`, `environment`, region, resource sizes, and tags, so you can adapt names and defaults before applying.

**Does it include remote state and locking?**
Templates include backend examples (S3 + DynamoDB for AWS) and recommend remote state with locking; adjust backend settings to match your tooling and accounts.