
This skill guides you through configuring LangChain across development, staging, and production environments, enabling environment-specific settings and promotion workflows.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill langchain-multi-env-setup

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (8.6 KB)
---
name: langchain-multi-env-setup
description: |
  Configure LangChain multi-environment setup for dev/staging/prod.
  Use when managing multiple environments, configuring environment-specific settings,
  or implementing environment promotion workflows.
  Trigger with phrases like "langchain environments", "langchain staging",
  "langchain dev prod", "environment configuration", "langchain env setup".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <[email protected]>
---

# LangChain Multi-Environment Setup

## Overview
Configure and manage LangChain applications across development, staging, and production environments.

## Prerequisites
- LangChain application ready for deployment
- Access to multiple deployment environments
- Secrets management solution (e.g., GCP Secret Manager)

## Instructions

### Step 1: Environment Configuration Structure
```
config/
├── base.yaml           # Shared configuration
├── development.yaml    # Development overrides
├── staging.yaml        # Staging overrides
├── production.yaml     # Production overrides
└── settings.py         # Configuration loader
```

### Step 2: Create Base Configuration
```yaml
# config/base.yaml
app:
  name: langchain-app
  version: "1.0.0"

llm:
  max_retries: 3
  request_timeout: 30

logging:
  format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  date_format: "%Y-%m-%d %H:%M:%S"

cache:
  enabled: true
  ttl_seconds: 3600
```

### Step 3: Environment-Specific Overrides
```yaml
# config/development.yaml
extends: base  # informational; the loader in Step 4 always merges base.yaml first

app:
  debug: true

llm:
  provider: openai
  model: gpt-4o-mini
  temperature: 0.7
  # Use lower tier in dev

logging:
  level: DEBUG

cache:
  type: memory
  # In-memory cache for local dev

langsmith:
  tracing: true
  project: langchain-dev
```

```yaml
# config/staging.yaml
extends: base

app:
  debug: false

llm:
  provider: openai
  model: gpt-4o-mini
  temperature: 0.5

logging:
  level: INFO

cache:
  type: redis
  url: ${REDIS_URL}  # From environment

langsmith:
  tracing: true
  project: langchain-staging
```

```yaml
# config/production.yaml
extends: base

app:
  debug: false

llm:
  provider: openai
  model: gpt-4o
  temperature: 0.3
  max_retries: 5
  request_timeout: 60

logging:
  level: WARNING

cache:
  type: redis
  url: ${REDIS_URL}

langsmith:
  tracing: true
  project: langchain-production

monitoring:
  prometheus: true
  sentry: true
```
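
For reference, deep-merging `base.yaml` with `production.yaml` (as the Step 4 loader does) yields an effective configuration along these lines:

```yaml
# Effective configuration when ENVIRONMENT=production
app:
  name: langchain-app
  version: "1.0.0"
  debug: false

llm:
  provider: openai
  model: gpt-4o
  temperature: 0.3
  max_retries: 5
  request_timeout: 60

logging:
  format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  date_format: "%Y-%m-%d %H:%M:%S"
  level: WARNING

cache:
  enabled: true
  ttl_seconds: 3600
  type: redis
  url: ${REDIS_URL}

langsmith:
  tracing: true
  project: langchain-production

monitoring:
  prometheus: true
  sentry: true
```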

### Step 4: Configuration Loader
```python
# config/settings.py
import os
from pathlib import Path
from typing import Any, Dict, Optional
import yaml
from pydantic import BaseModel, Field
from pydantic_settings import BaseSettings, SettingsConfigDict

class LLMConfig(BaseModel):
    provider: str = "openai"
    model: str = "gpt-4o-mini"
    temperature: float = 0.7
    max_retries: int = 3
    request_timeout: int = 30

class CacheConfig(BaseModel):
    enabled: bool = True
    type: str = "memory"
    url: Optional[str] = None
    ttl_seconds: int = 3600

class LangSmithConfig(BaseModel):
    tracing: bool = False
    project: str = "default"

class Settings(BaseSettings):
    # pydantic-settings v2 style: configure via model_config rather than the
    # deprecated class Config / Field(env=...) pattern. extra="ignore" lets
    # YAML sections without a matching field (app, logging, extends,
    # monitoring) pass through without raising.
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    environment: str = "development"  # populated from the ENVIRONMENT variable
    llm: LLMConfig = Field(default_factory=LLMConfig)
    cache: CacheConfig = Field(default_factory=CacheConfig)
    langsmith: LangSmithConfig = Field(default_factory=LangSmithConfig)

def load_config(environment: Optional[str] = None) -> Settings:
    """Load configuration for the specified environment."""
    env = environment or os.environ.get("ENVIRONMENT", "development")
    config_dir = Path(__file__).parent

    # Load base config
    config = {}
    base_path = config_dir / "base.yaml"
    if base_path.exists():
        with open(base_path) as f:
            config = yaml.safe_load(f) or {}

    # Load environment-specific overrides
    env_path = config_dir / f"{env}.yaml"
    if env_path.exists():
        with open(env_path) as f:
            env_config = yaml.safe_load(f) or {}
            config = deep_merge(config, env_config)

    # Resolve environment variables
    config = resolve_env_vars(config)

    return Settings(**config)

def deep_merge(base: Dict, override: Dict) -> Dict:
    """Deep merge two dictionaries."""
    result = base.copy()
    for key, value in override.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

def resolve_env_vars(config: Any) -> Any:
    """Resolve ${VAR} patterns in config values."""
    if isinstance(config, dict):
        return {k: resolve_env_vars(v) for k, v in config.items()}
    elif isinstance(config, list):
        return [resolve_env_vars(v) for v in config]
    elif isinstance(config, str) and config.startswith("${") and config.endswith("}"):
        var_name = config[2:-1]
        return os.environ.get(var_name, "")
    return config

# Global settings instance
settings = load_config()
```
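
The merge-and-resolve semantics can be checked in isolation. This standalone sketch re-states `deep_merge` and `resolve_env_vars` from `settings.py` above so it runs on its own (the sample dictionaries are illustrative):

```python
import os

# Re-stated from settings.py above so this snippet runs standalone.
def deep_merge(base: dict, override: dict) -> dict:
    """Deep merge two dictionaries; override wins on conflicts."""
    result = base.copy()
    for key, value in override.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

def resolve_env_vars(config):
    """Resolve ${VAR} patterns in config values from the environment."""
    if isinstance(config, dict):
        return {k: resolve_env_vars(v) for k, v in config.items()}
    if isinstance(config, list):
        return [resolve_env_vars(v) for v in config]
    if isinstance(config, str) and config.startswith("${") and config.endswith("}"):
        return os.environ.get(config[2:-1], "")
    return config

base = {"llm": {"model": "gpt-4o-mini", "max_retries": 3}, "cache": {"enabled": True}}
prod = {"llm": {"model": "gpt-4o", "max_retries": 5}, "cache": {"url": "${REDIS_URL}"}}

os.environ["REDIS_URL"] = "redis://localhost:6379"
merged = resolve_env_vars(deep_merge(base, prod))
print(merged["llm"])    # {'model': 'gpt-4o', 'max_retries': 5}
print(merged["cache"])  # {'enabled': True, 'url': 'redis://localhost:6379'}
```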

### Step 5: Environment-Aware LLM Factory
```python
# infrastructure/llm_factory.py
from config.settings import settings
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.language_models import BaseChatModel

def create_llm() -> BaseChatModel:
    """Create LLM based on environment configuration."""
    llm_config = settings.llm

    if llm_config.provider == "openai":
        return ChatOpenAI(
            model=llm_config.model,
            temperature=llm_config.temperature,
            max_retries=llm_config.max_retries,
            request_timeout=llm_config.request_timeout,
        )
    elif llm_config.provider == "anthropic":
        return ChatAnthropic(
            model=llm_config.model,
            temperature=llm_config.temperature,
            max_retries=llm_config.max_retries,
        )
    else:
        raise ValueError(f"Unknown provider: {llm_config.provider}")
```

### Step 6: Environment-Specific Secrets
```python
# infrastructure/secrets.py
import os
from google.cloud import secretmanager

def get_secret(secret_id: str) -> str:
    """Get secret from appropriate source based on environment."""
    env = os.environ.get("ENVIRONMENT", "development")

    if env == "development":
        # Use local .env file in development
        return os.environ.get(secret_id, "")

    else:
        # Use Secret Manager in staging/production
        client = secretmanager.SecretManagerServiceClient()
        project_id = os.environ.get("GCP_PROJECT")
        name = f"projects/{project_id}/secrets/{secret_id}/versions/latest"
        response = client.access_secret_version(request={"name": name})
        return response.payload.data.decode("UTF-8")

# Usage
openai_key = get_secret("OPENAI_API_KEY")
```
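
In staging and production, each `get_secret` call makes a network round trip to Secret Manager. A hedged sketch of per-process caching (the fallback to environment variables here stands in for the Secret Manager branch so the snippet runs standalone):

```python
import functools
import os

# Illustrative sketch: cache secret lookups so the backing store is hit at
# most once per secret per process. In staging/production the body would
# call get_secret() from secrets.py; the environment fallback below is a
# stand-in so this runs without GCP credentials.
@functools.lru_cache(maxsize=None)
def get_secret_cached(secret_id: str) -> str:
    return os.environ.get(secret_id, "")

os.environ["OPENAI_API_KEY"] = "sk-test"
print(get_secret_cached("OPENAI_API_KEY"))  # sk-test
```

Note that `lru_cache` means rotated secrets are not picked up until the process restarts; clear the cache (or bound the TTL) if live rotation matters.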

### Step 7: Docker Compose for Local Environments
```yaml
# docker-compose.yml
# (top-level "version" key omitted: it is obsolete in the Compose Specification)

services:
  app:
    build: .
    environment:
      - ENVIRONMENT=development
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    ports:
      - "8080:8080"
    volumes:
      - ./src:/app/src
    depends_on:
      - redis

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  # Local development tools
  langsmith-proxy:
    image: langchain/langsmith-proxy:latest
    environment:
      - LANGCHAIN_API_KEY=${LANGCHAIN_API_KEY}
```
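
The compose file and the `.env`-backed loader both expect a few variables; a `.env.example` documenting them (names drawn from the steps above) might look like:

```bash
# .env.example — copy to .env and fill in real values
ENVIRONMENT=development
OPENAI_API_KEY=
LANGCHAIN_API_KEY=
REDIS_URL=redis://localhost:6379
GCP_PROJECT=
```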

## Output
- Multi-environment configuration structure
- Environment-aware configuration loader
- Secrets management per environment
- Docker Compose for local development

## Examples

### Running Different Environments
```bash
# Development
ENVIRONMENT=development python main.py

# Staging
ENVIRONMENT=staging python main.py

# Production
ENVIRONMENT=production python main.py
```

### Environment Promotion Workflow
```bash
# 1. Test in development
ENVIRONMENT=development pytest tests/

# 2. Deploy to staging
gcloud run deploy langchain-api-staging \
    --set-env-vars="ENVIRONMENT=staging"

# 3. Run integration tests
ENVIRONMENT=staging pytest tests/integration/

# 4. Deploy to production
gcloud run deploy langchain-api \
    --set-env-vars="ENVIRONMENT=production"
```
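
The promotion steps above can be automated in CI. A minimal GitHub Actions sketch (workflow, job, and secret names are illustrative assumptions, and the deploy job would need GCP authentication configured):

```yaml
# .github/workflows/promote.yml (illustrative)
name: promote
on:
  push:
    branches: [main]

jobs:
  test-dev:
    runs-on: ubuntu-latest
    env:
      ENVIRONMENT: development
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest tests/

  deploy-staging:
    needs: test-dev
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: |
          gcloud run deploy langchain-api-staging \
            --set-env-vars="ENVIRONMENT=staging"
```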

## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| Config Not Found | Missing YAML file | Ensure environment file exists |
| Secret Missing | Not in Secret Manager | Add secret for environment |
| Env Var Not Set | Missing .env | Create .env from .env.example |

## Resources
- [Pydantic Settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings/)
- [GCP Secret Manager](https://cloud.google.com/secret-manager/docs)
- [12-Factor App Config](https://12factor.net/config)

## Next Steps
Use `langchain-observability` for environment-specific monitoring.

Overview

This skill configures and manages LangChain applications across development, staging, and production environments. It provides a structured config layout, an environment-aware loader, secrets handling, and an LLM factory to switch providers per environment. The goal is predictable promotion workflows and consistent runtime behavior across environments.

How this skill works

It uses a config directory with a shared base.yaml and environment-specific overrides (development/staging/production). A Python loader merges base and override files, resolves ${VAR} entries from the environment, and returns a Pydantic Settings instance. The LLM factory and secrets helper read those settings to instantiate providers and fetch secrets from .env in development or GCP Secret Manager in staging/production.

When to use it

  • You have a LangChain app that must run in multiple environments (dev/staging/prod).
  • You need environment-specific LLM settings (model, temperature, timeouts).
  • You want an environment-aware secrets strategy (local .env vs secret manager).
  • You need a predictable promotion workflow and repeatable deployments.
  • You want local Docker Compose for development with redis and local tooling.

Best practices

  • Keep shared defaults in config/base.yaml and make minimal environment overrides.
  • Store only non-sensitive defaults in repo; load secrets from environment or secret manager.
  • Use the provided deep_merge and env var resolver to keep YAML simple and DRY.
  • Pin production LLM timeouts and retry values higher than dev to increase resilience.
  • Run integration tests in staging before production deployment and use CI to set ENVIRONMENT and secrets.

Example use cases

  • Local development with fast, low-cost models and in-memory cache via docker-compose.
  • Staging runs with Redis cache, tracing enabled, and shared LangSmith project for debugging.
  • Production runs with more resilient LLM settings (higher retries and timeouts), Redis, and Prometheus and Sentry enabled for monitoring.
  • Automated promotion: run unit tests in development, deploy to staging and run integration tests, then deploy to production.
  • Secrets access: read OPENAI key from .env locally, use GCP Secret Manager in staging/production.

FAQ

How do I add a new environment like 'qa'?

Create config/qa.yaml that extends base, add overrides, and set ENVIRONMENT=qa when starting the app.
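
For example, a hypothetical config/qa.yaml might look like:

```yaml
# config/qa.yaml
extends: base

app:
  debug: false

llm:
  provider: openai
  model: gpt-4o-mini
  temperature: 0.5

logging:
  level: INFO

langsmith:
  tracing: true
  project: langchain-qa
```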

What if an environment variable referenced in YAML is missing?

resolve_env_vars returns an empty string if the env var is missing; ensure required vars are set or validate after loading.
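
If silent empty strings are unacceptable, a stricter drop-in variant of resolve_env_vars can fail fast at startup instead. A hedged standalone sketch:

```python
import os

# Stricter variant of resolve_env_vars from settings.py: raises KeyError
# for a missing variable instead of silently substituting "".
def resolve_env_vars_strict(config):
    if isinstance(config, dict):
        return {k: resolve_env_vars_strict(v) for k, v in config.items()}
    if isinstance(config, list):
        return [resolve_env_vars_strict(v) for v in config]
    if isinstance(config, str) and config.startswith("${") and config.endswith("}"):
        var_name = config[2:-1]
        if var_name not in os.environ:
            raise KeyError(f"required environment variable not set: {var_name}")
        return os.environ[var_name]
    return config

os.environ["REDIS_URL"] = "redis://localhost:6379"
print(resolve_env_vars_strict({"cache": {"url": "${REDIS_URL}"}}))
# {'cache': {'url': 'redis://localhost:6379'}}
```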