
adding-models skill

/.skills/adding-models

This skill guides you through adding a new LLM model to Letta Code, including model handles, configuration, and CI testing.

npx playbooks add skill letta-ai/letta-code --skill adding-models


---
name: adding-models
description: Guide for adding new LLM models to Letta Code. Use when the user wants to add support for a new model, needs to know valid model handles, or wants to update the model configuration. Covers models.json configuration, CI test matrix, and handle validation.
---

# Adding Models

This skill guides you through adding a new LLM model to Letta Code.

## Quick Reference

**Key files**:
- `src/models.json` - Model definitions (required)
- `.github/workflows/ci.yml` - CI test matrix (optional)
- `src/tools/manager.ts` - Toolset detection logic (rarely needed)

## Workflow

### Step 1: Find Valid Model Handles

Query the Letta API to see available models:

```bash
curl -s https://api.letta.com/v1/models/ | jq '.[] | .handle'
```

Or filter by provider:
```bash
curl -s https://api.letta.com/v1/models/ | jq '.[] | select(.handle | startswith("google_ai/")) | .handle'
```

Common provider prefixes:
- `anthropic/` - Claude models
- `openai/` - GPT models  
- `google_ai/` - Gemini models
- `google_vertex/` - Vertex AI
- `openrouter/` - Various providers
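
Where `jq` isn't available, the same prefix filter can be sketched in Python. The payload below is a made-up sample; the live endpoint is `https://api.letta.com/v1/models/` and each entry carries a `handle` field:

```python
# Filter model handles by provider prefix, mirroring the jq query above.
# sample_models stands in for the parsed JSON response.
sample_models = [
    {"handle": "anthropic/claude-sonnet-4-5"},
    {"handle": "openai/gpt-4.1"},
    {"handle": "google_ai/gemini-3-flash-preview"},
]

def handles_for_provider(models, prefix):
    """Return the handles that start with the given provider prefix."""
    return [m["handle"] for m in models if m["handle"].startswith(prefix)]

print(handles_for_provider(sample_models, "google_ai/"))
# → ['google_ai/gemini-3-flash-preview']
```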

### Step 2: Add to models.json

Add an entry to `src/models.json`:

```json
{
  "id": "model-shortname",
  "handle": "provider/model-name",
  "label": "Human Readable Name",
  "description": "Brief description of the model",
  "isFeatured": true,
  "updateArgs": {
    "context_window": 180000,
    "temperature": 1.0
  }
}
```

**Field reference**:
- `id`: Short identifier used with `--model` flag (e.g., `gemini-3-flash`)
- `handle`: Full provider/model path from the API (e.g., `google_ai/gemini-3-flash-preview`)
- `label`: Display name in model selector
- `description`: Brief description shown in selector
- `isFeatured`: If true, appears in featured models section
- `updateArgs`: Model-specific configuration (context window, temperature, reasoning settings, etc.)
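
A quick sanity check of the entry shape can catch typos before committing. The helper below is hypothetical, not part of Letta Code; it only checks the field names listed above:

```python
# Hypothetical pre-commit check for a models.json entry.
REQUIRED_FIELDS = {"id", "handle", "label", "description"}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems with a models.json entry (empty if OK)."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - entry.keys())]
    handle = entry.get("handle", "")
    if handle and "/" not in handle:
        problems.append("handle must look like provider/model-name")
    return problems

entry = {
    "id": "gemini-3-flash",
    "handle": "google_ai/gemini-3-flash-preview",
    "label": "Gemini 3 Flash",
    "description": "Fast Gemini preview model",
    "updateArgs": {"context_window": 180000},
}
print(validate_entry(entry))  # → []
```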

**Provider prefixes**:
- `anthropic/` - Anthropic (Claude models)
- `openai/` - OpenAI (GPT models)
- `google_ai/` - Google AI (Gemini models)
- `google_vertex/` - Google Vertex AI
- `openrouter/` - OpenRouter (various providers)

### Step 3: Test the Model

Test with headless mode:

```bash
bun run src/index.ts --new --model <model-id> -p "hi, what model are you?"
```

Example:
```bash
bun run src/index.ts --new --model gemini-3-flash -p "hi, what model are you?"
```

### Step 4: Add to CI Test Matrix (Optional)

To include the model in automated testing, add it to `.github/workflows/ci.yml`:

```yaml
# Find the headless job matrix around line 122
model: [gpt-5-minimal, gpt-4.1, sonnet-4.5, gemini-pro, your-new-model, glm-4.6, haiku]
```

## Toolset Detection

Models are automatically assigned toolsets based on provider:
- `openai/*` → `codex` toolset
- `google_ai/*` or `google_vertex/*` → `gemini` toolset
- Others → `default` toolset

This is handled by `isGeminiModel()` and `isOpenAIModel()` in `src/tools/manager.ts`. You typically don't need to modify this unless adding a new provider.
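
The prefix-to-toolset mapping described above can be sketched as follows. The real implementation is the TypeScript in `src/tools/manager.ts`; this Python mirror is illustrative only:

```python
# Sketch of the provider-prefix → toolset mapping.
def toolset_for_handle(handle: str) -> str:
    if handle.startswith("openai/"):
        return "codex"
    if handle.startswith(("google_ai/", "google_vertex/")):
        return "gemini"
    return "default"

print(toolset_for_handle("google_vertex/gemini-3-pro"))  # → gemini
```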

## Common Issues

**"Handle not found" error**: The model handle is incorrect. Re-run the API query from Step 1 to see valid handles.

**Model works but wrong toolset**: Check `src/tools/manager.ts` to ensure the provider prefix is recognized.

## Overview

This skill guides adding new LLM models to Letta Code, covering model registration, validation, and optional CI inclusion. It explains valid model handles, the `models.json` entry format, testing commands, and when to adjust toolset detection. Use it to ensure new models are configured, validated, and tested consistently.

## How this skill works

It shows how to query the Letta API for valid model handles, which fields belong in `src/models.json`, and how automated toolset assignment works. It also explains how to run headless tests locally and optionally add the model to the CI test matrix. Finally, it highlights where to check provider prefixes if a model is assigned the wrong toolset.

## When to use it

- Adding support for a new LLM provider or model version.
- Updating model configuration such as context window or default temperature.
- Validating that a model handle is recognized by the Letta API.
- Including a model in automated CI tests for regression coverage.
- Troubleshooting incorrect toolset assignment for a model.

## Best practices

- Query the Letta API for valid handles before editing `models.json` to avoid handle-not-found errors.
- Use a clear short `id` that matches the `--model` flag users will pass.
- Provide a concise `label` and `description` so the model shows up helpfully in selectors.
- Set `updateArgs` (`context_window`, `temperature`) conservatively and document provider-specific settings.
- Run the headless test command locally after adding the model and before opening a PR.
- Add the model to the CI matrix only when you want automated coverage across environments.

## Example use cases

- Add `gemini-3-flash` by querying handles, creating a `models.json` entry, and running the headless test.
- Feature a new OpenAI GPT model by setting `isFeatured` to `true` and updating `updateArgs`.
- Validate third-party or OpenRouter model handles before committing to avoid runtime errors.
- Include a non-default model in CI to ensure end-to-end behavior stays stable across releases.
- Fix a model assigned the wrong toolset by checking provider-prefix detection in `src/tools/manager.ts`.

## FAQ

**What causes a "handle not found" error?**

The handle in `models.json` doesn't match any entry returned by the Letta API; re-run the API query to get the correct `provider/model-name` path.

**When should I change toolset detection logic?**

Only when adding a new provider prefix that should map to a specific toolset; existing provider prefixes already auto-assign the `codex`, `gemini`, or `default` toolsets.