
This skill provisions Vertex AI infrastructure with Terraform, enabling automated endpoints, vector search, pipelines, and secure production guardrails.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill vertex-infra-expert

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: vertex-infra-expert
description: |
  Use when provisioning Vertex AI infrastructure with Terraform. Trigger with phrases like "vertex ai terraform", "deploy gemini terraform", "model garden infrastructure", "vertex ai endpoints terraform", or "vector search terraform". Provisions Model Garden models, Gemini endpoints, vector search indices, ML pipelines, and production AI services with encryption and auto-scaling.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash(terraform:*), Bash(gcloud:*)
version: 1.0.0
author: Jeremy Longshore <[email protected]>
license: MIT
---

# Vertex Infra Expert

## Overview

Provision Vertex AI infrastructure (endpoints, deployed models, vector search indices, pipelines) with Terraform, applying production guardrails: encryption, autoscaling, IAM least privilege, and operational validation steps. Use this skill to generate a minimal working Terraform baseline and iterate toward enterprise-ready deployments.

## Prerequisites

Before using this skill, ensure:
- Google Cloud project with Vertex AI API enabled
- Terraform 1.0+ installed
- gcloud CLI authenticated with appropriate permissions
- Understanding of Vertex AI services and ML models
- KMS keys created for encryption (if required)
- GCS buckets for model artifacts and embeddings
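The API prerequisite can also be captured in Terraform itself. A minimal sketch, assuming a placeholder project ID, region, and provider version:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 5.0"
    }
  }
}

provider "google" {
  project = "my-vertex-project" # placeholder project ID
  region  = "us-central1"
}

# Enable the Vertex AI API in the target project.
resource "google_project_service" "aiplatform" {
  service            = "aiplatform.googleapis.com"
  disable_on_destroy = false
}
```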

## Instructions

1. **Define AI Services**: Identify required Vertex AI components (endpoints, vector search, pipelines)
2. **Configure Terraform**: Set up backend and define project variables
3. **Provision Endpoints**: Deploy Gemini or custom model endpoints with auto-scaling
4. **Set Up Vector Search**: Create indices for embeddings with appropriate dimensions
5. **Configure Encryption**: Apply KMS encryption to endpoints and data
6. **Implement Monitoring**: Set up Cloud Monitoring for model performance
7. **Apply IAM Policies**: Grant least privilege access to AI services
8. **Validate Deployment**: Test endpoints and verify model availability
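Steps 3 and 5 can be sketched as a single encrypted endpoint resource. The project, location, and KMS key name below are placeholder assumptions; note that replica-based autoscaling is configured when a model is deployed to the endpoint (for example with `gcloud ai endpoints deploy-model`), not on the endpoint resource itself:

```hcl
# Vertex AI endpoint with customer-managed encryption (CMEK).
resource "google_vertex_ai_endpoint" "gemini" {
  name         = "gemini-endpoint"
  display_name = "gemini-endpoint"
  location     = "us-central1"

  encryption_spec {
    # Placeholder key; create and rotate via Cloud KMS.
    kms_key_name = "projects/my-vertex-project/locations/us-central1/keyRings/ml/cryptoKeys/endpoint-key"
  }
}
```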

## Output



## Error Handling

See `{baseDir}/references/errors.md` for comprehensive error handling.

## Examples

See `{baseDir}/references/examples.md` for detailed examples.
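As one illustrative sketch, a batch-updated vector search index might look like the following. The names, bucket path, and tuning values are assumptions to adjust for your embedding model, and serving additionally requires an index endpoint and a deployed index:

```hcl
# Tree-AH approximate nearest neighbor index for embeddings.
resource "google_vertex_ai_index" "embeddings" {
  display_name = "embeddings-index"
  region       = "us-central1"

  metadata {
    contents_delta_uri = "gs://my-embeddings-bucket/initial" # placeholder GCS path
    config {
      dimensions                  = 768 # must match your embedding model
      approximate_neighbors_count = 150
      distance_measure_type       = "DOT_PRODUCT_DISTANCE"
      algorithm_config {
        tree_ah_config {
          leaf_node_embedding_count    = 1000
          leaf_nodes_to_search_percent = 10
        }
      }
    }
  }

  index_update_method = "BATCH_UPDATE"
}
```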

## Resources

- Vertex AI Terraform: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/vertex_ai_endpoint
- Vertex AI documentation: https://cloud.google.com/vertex-ai/docs
- Model Garden: https://cloud.google.com/model-garden
- Vector Search guide: https://cloud.google.com/vertex-ai/docs/vector-search
- Terraform examples in {baseDir}/vertex-examples/

Overview

This skill provisions Vertex AI infrastructure using Terraform to create endpoints, deploy Model Garden or Gemini models, build vector search indices, and orchestrate ML pipelines with production guardrails. It focuses on encryption, autoscaling, IAM least privilege, and operational validation to deliver a minimal, secure Terraform baseline you can iterate into enterprise deployments.

How this skill works

The skill inspects declared AI components and generates Terraform manifests and module scaffolding for Vertex AI services: endpoints, deployed models, vector indices, and pipelines. It embeds best-practice settings for KMS encryption, autoscaling policies, monitoring hooks, and IAM bindings, and provides validation steps and test targets to confirm deployments are operational.

When to use it

  • Provision new Vertex AI environments with Terraform
  • Deploy Gemini or Model Garden models into managed endpoints
  • Create vector search indices and embedding ingestion pipelines
  • Add production guardrails (encryption, autoscaling, monitoring) to AI services
  • Bootstrap ML pipelines or production AI services for staging/production

Best practices

  • Start with an isolated project and remote Terraform state backend to enable safe iteration
  • Use KMS keys for model artifact and endpoint encryption; require key rotation policy
  • Define autoscaling policies tuned to expected traffic and include minimum replica settings
  • Apply least-privilege IAM roles scoped to service accounts and Terraform automation
  • Integrate Cloud Monitoring and alerting for latency, error rates, and model drift checks
  • Include validation steps: smoke tests for endpoints, sample inference, and index lookup tests
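The first and fourth practices above can be sketched in Terraform. Bucket, project, and service account names are placeholder assumptions:

```hcl
# Remote state in GCS keeps iteration safe across collaborators.
terraform {
  backend "gcs" {
    bucket = "my-terraform-state-bucket" # placeholder bucket
    prefix = "vertex-ai/prod"
  }
}

# Least privilege: grant the runtime service account only the
# Vertex AI user role instead of a project-wide editor role.
resource "google_project_iam_member" "vertex_user" {
  project = "my-vertex-project"
  role    = "roles/aiplatform.user"
  member  = "serviceAccount:vertex-runtime@my-vertex-project.iam.gserviceaccount.com"
}
```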

Example use cases

  • Generate Terraform baseline for a Gemini endpoint with autoscaling and KMS encryption
  • Provision a vector search index, import embeddings to GCS, and wire an ingestion pipeline
  • Deploy a Model Garden model to Vertex AI endpoint with monitoring and alerting configured
  • Create end-to-end ML pipeline resources (training, artifacts, deployment) with IAM least privilege
  • Migrate a staging Vertex AI setup to production by applying hardened Terraform modules and validations

FAQ

Do I need any GCP prerequisites before running the generated Terraform?

Yes. Enable the Vertex AI API in a Google Cloud project, authenticate gcloud and Terraform, and create the required KMS keys and GCS buckets for model artifacts and embeddings.

Will this produce production-ready configurations out of the box?

It provides a secure, minimal baseline with production guardrails (encryption, autoscaling, IAM). Expect to tune resource sizes, autoscaling thresholds, and monitoring rules for your workload.

Can I use the skill to deploy custom container models as well as Model Garden models?

Yes. The Terraform patterns support both Model Garden/Gemini deployments and custom container-based model deployments to Vertex AI endpoints.