
convex-agents-fundamentals skill


This skill guides you through initializing Convex agents, managing threads, and generating LLM responses for chat-based interactions.

npx playbooks add skill sstobo/convex-skills --skill convex-agents-fundamentals


---
name: "Convex Agents Fundamentals"
description: "Sets up and configures Convex agents for chat-based AI interactions. Use this when initializing agent instances, creating conversation threads, and generating basic text or structured responses from LLMs. Essential foundation for any Convex agent implementation."
---

## Purpose

Guides you through the core patterns for setting up Convex agents, managing conversation threads, and generating LLM responses. This is the foundation upon which all other agent capabilities build.

## When to Use This Skill

- Setting up your first Convex agent in a project
- Creating or continuing conversation threads with users
- Generating text responses or structured JSON objects from an LLM
- Configuring agent defaults (system prompt, chat model, embedding model)
- Building basic chat interfaces that need message history

## How to Use It

### 1. Install and Configure

Add the agent component to your `convex.config.ts`:

```typescript
// convex/convex.config.ts
import { defineApp } from "convex/server";
import agent from "@convex-dev/agent/convex.config";

const app = defineApp();
app.use(agent);

export default app;
```

Run `npx convex dev` to generate the required code.

### 2. Define Your Agent

Create an agent instance with a chat model:

```typescript
// convex/agents/myAgent.ts
import { components } from "../_generated/api";
import { Agent } from "@convex-dev/agent";
import { openai } from "@ai-sdk/openai";

export const myAgent = new Agent(components.agent, {
  name: "My Assistant",
  languageModel: openai.chat("gpt-4o-mini"),
  instructions: "You are a helpful assistant.", // Optional: default system prompt
});
```
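The `@ai-sdk/openai` provider reads its API key from the environment. On Convex, one way to supply it is a deployment environment variable (this assumes the provider's default `OPENAI_API_KEY` variable name):

```shell
# Store the key on the Convex deployment; the value shown is a placeholder.
npx convex env set OPENAI_API_KEY sk-your-key-here
```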

### 3. Create Threads

Create a thread for a user to manage their conversation history:

```typescript
// convex/threads.ts
import { action } from "./_generated/server";
import { v } from "convex/values";
import { myAgent } from "./agents/myAgent";

export const createNewThread = action({
  args: { userId: v.string() },
  handler: async (ctx, { userId }) => {
    const { threadId } = await myAgent.createThread(ctx, {
      userId,
      title: "New Conversation",
    });
    return { threadId };
  },
});
```

### 4. Generate Responses

Generate text or structured responses in a thread:

```typescript
// convex/generation.ts
import { action } from "./_generated/server";
import { v } from "convex/values";
import { myAgent } from "./agents/myAgent";

export const generateReply = action({
  args: { threadId: v.string(), prompt: v.string() },
  handler: async (ctx, { threadId, prompt }) => {
    const { thread } = await myAgent.continueThread(ctx, { threadId });
    const result = await thread.generateText({ prompt });
    return result.text;
  },
});
```
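This skill also covers structured JSON objects, which the snippet above does not show. Here is a minimal sketch of structured output, assuming `thread.generateObject` mirrors the AI SDK's `generateObject` API and that `zod` is installed; the file name, action name, and schema fields are all illustrative:

```typescript
// convex/classify.ts — structured-output sketch (names are illustrative)
import { action } from "./_generated/server";
import { v } from "convex/values";
import { z } from "zod";
import { myAgent } from "./agents/myAgent";

// Hypothetical schema; replace the fields with whatever your feature needs.
const sentimentSchema = z.object({
  sentiment: z.enum(["positive", "neutral", "negative"]),
  summary: z.string(),
});

export const classifyReply = action({
  args: { threadId: v.string(), prompt: v.string() },
  handler: async (ctx, { threadId, prompt }) => {
    const { thread } = await myAgent.continueThread(ctx, { threadId });
    // The model output is validated against the schema before it is returned.
    const result = await thread.generateObject({
      prompt,
      schema: sentimentSchema,
    });
    return result.object;
  },
});
```

As with `generateText`, the exchange is saved to the thread, so later generations see it as history.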

## Key Principles

- **Thread isolation**: Each user/conversation gets its own thread for independent history
- **Automatic message storage**: Generated responses are automatically saved to the thread
- **Context by default**: Each generation includes recent message history automatically
- **Async-friendly**: Use actions for LLM calls; mutations for transactional writes
- **Type safety**: Always provide explicit return types to avoid circular reference errors
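The type-safety principle can be made concrete with an explicit `returns` validator and handler return type. A sketch reusing the `generateReply` action from step 4:

```typescript
// convex/generation.ts — explicit return types avoid circular inference
import { action } from "./_generated/server";
import { v } from "convex/values";
import { myAgent } from "./agents/myAgent";

export const generateReply = action({
  args: { threadId: v.string(), prompt: v.string() },
  returns: v.string(), // runtime validator for the return value
  // The Promise<string> annotation keeps TypeScript from inferring the
  // return type through the agent's types, which can be circular.
  handler: async (ctx, { threadId, prompt }): Promise<string> => {
    const { thread } = await myAgent.continueThread(ctx, { threadId });
    const result = await thread.generateText({ prompt });
    return result.text;
  },
});
```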

## Common Patterns

- **Per-user organization**: Always include `userId` when creating threads
- **Message history**: Automatically included in LLM context
- **Thread reuse**: Same thread can be used by multiple agents

## Next Steps

- **Manage threads**: See **threads** skill for conversation management
- **Work with messages**: See **messages** skill for saving and retrieving
- **Add tools**: See **tools** skill to let agents take actions

## Overview

This skill sets up and configures Convex agents for chat-based AI interactions, providing the foundational patterns for creating agent instances, threads, and basic LLM responses. It focuses on initializing agents, managing per-user conversation threads, and producing text or structured outputs that are stored and reused. Use it as the baseline for any Convex agent implementation.

## How this skill works

Install the agent component into your Convex app and instantiate `Agent` objects with a chat model and optional default instructions. Create per-user threads to isolate conversation history, then call generation methods on those threads to produce text or structured JSON. Generated messages are stored automatically, and recent message history is included in each LLM call. Use actions for async LLM calls and mutations for transactional writes.

## When to use it

- Bootstrapping your first Convex agent in a project
- Creating or continuing per-user conversation threads
- Generating text replies or structured JSON from a chat model
- Configuring agent defaults like system prompt and model selection
- Building simple chat interfaces that rely on message history

## Best practices

- Create one thread per user or conversation to ensure thread isolation
- Include `userId` when creating threads to keep per-user organization
- Use actions for any LLM calls to keep async behavior safe in Convex
- Provide explicit return types on server code to maintain type safety
- Rely on automatic message storage and recent history rather than re-sending full context

## Example use cases

- Initialize a "My Assistant" agent with a specific chat model and default instructions
- Create a new conversation thread when a user starts a chat and return the `threadId`
- Continue an existing thread and call `thread.generateText({ prompt })` to get a reply
- Produce structured JSON responses (e.g., form data) from the model and store them in the thread
- Build a chat UI that reads thread history and calls `thread.generateText` for new messages

## FAQ

**How do I keep conversation history private per user?**

Create a thread per user using their `userId`; each thread stores its own messages, so histories remain isolated.

**Should I call the model directly from client code?**

No. Use Convex actions for LLM calls so async operations and credentials remain secure on the server side.
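On the client, such an action can be invoked with `useAction` from `convex/react`. A sketch assuming a React app; the hook name, file path, and `api.generation.generateReply` reference are illustrative:

```typescript
// src/useSendMessage.ts — client-side sketch (names are illustrative)
import { useAction } from "convex/react";
import { api } from "../convex/_generated/api";

export function useSendMessage(threadId: string) {
  const generateReply = useAction(api.generation.generateReply);

  // The LLM call and API key stay on the server; the client only
  // sends the prompt and receives the final reply text.
  return async (prompt: string): Promise<string> => {
    return await generateReply({ threadId, prompt });
  };
}
```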