pica-mastra skill

This skill helps you integrate PICA with Mastra using MCP, enabling tool calls and streaming in agent workflows.

npx playbooks add skill picahq/skills --skill pica-mastra

SKILL.md
---
name: pica-mastra
description: Integrate PICA into an application using Mastra. Use when adding PICA tools to a Mastra agent via @mastra/core and @mastra/mcp, setting up PICA MCP with Mastra, or when the user mentions PICA with Mastra.
---

# PICA MCP Integration with Mastra

PICA provides a unified API platform that connects AI agents to third-party services (CRMs, email, calendars, databases, etc.) through MCP tool calling.

## PICA MCP Server

PICA exposes its capabilities through an MCP server distributed as `@picahq/mcp`. It uses **stdio transport** — it runs as a local subprocess via `npx`.

### MCP Configuration

```json
{
  "mcpServers": {
    "pica": {
      "command": "npx",
      "args": ["@picahq/mcp"],
      "env": {
        "PICA_SECRET": "your-pica-secret-key"
      }
    }
  }
}
```

- **Package**: `@picahq/mcp` (run via `npx`, no install needed)
- **Auth**: `PICA_SECRET` environment variable (obtain from the PICA dashboard https://app.picaos.com/settings/api-keys)
- **Transport**: stdio (standard input/output)

### Environment Variable

Always store secrets in environment variables; never hardcode them:

```
PICA_SECRET=sk_test_...
OPENAI_API_KEY=sk-...
```

Add them to `.env.local` (or equivalent) and document in `.env.example`. Mastra auto-reads provider API keys from environment (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.).
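Since a missing `PICA_SECRET` only surfaces later as an MCP auth failure, it can help to fail fast at startup. The helper below is illustrative (not part of Mastra or PICA):

```typescript
// Hypothetical helper: fail fast if required env vars are missing.
// Not part of Mastra or PICA; shown as a defensive startup pattern.
function requireEnv(...names: string[]): Record<string, string> {
  const missing = names.filter((n) => !process.env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return Object.fromEntries(names.map((n) => [n, process.env[n] as string]));
}

// Usage: call once at startup, before constructing the MCP client.
// const { PICA_SECRET } = requireEnv("PICA_SECRET", "OPENAI_API_KEY");
```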

## Using PICA with Mastra

Mastra has **first-class MCP support** via the `@mastra/mcp` package. The `MCPClient` auto-detects transport type from your config — provide `command`/`args` for stdio, or `url` for HTTP/SSE.

### Required packages

```bash
pnpm add @mastra/core @mastra/mcp
```

### Before implementing: look up the latest docs

The Mastra API may change between versions. **Always check the latest docs first:**

- Docs: https://mastra.ai/docs
- MCP guide: https://mastra.ai/docs/mcp/overview
- MCPClient reference: https://mastra.ai/reference/tools/mcp-client
- GitHub: https://github.com/mastra-ai/mastra

### Integration pattern

1. **Create an MCPClient** with `command: "npx"`, `args: ["@picahq/mcp"]` — transport is auto-detected as stdio
2. **List tools** via `await mcp.listTools()` — returns tools in Mastra's format
3. **Create an Agent** with the tools and a model string (`"provider/model-name"`)
4. **Stream** via `agent.stream(messages, { maxSteps: 5 })` — the agent loop handles tool calls automatically
5. **Iterate `fullStream`** for typed chunks (`text-delta`, `tool-call`, `tool-result`) — all data lives on `chunk.payload`
6. **Disconnect** the MCP client when done via `await mcp.disconnect()`

When passing environment variables, spread `process.env` so the subprocess inherits PATH and other system vars:

```typescript
env: {
  ...(process.env as Record<string, string>),
  PICA_SECRET: process.env.PICA_SECRET!,
}
```

### Minimal example

```typescript
import { Agent } from "@mastra/core/agent";
import { MCPClient } from "@mastra/mcp";

const mcp = new MCPClient({
  id: "pica-mcp",
  servers: {
    pica: {
      command: "npx",
      args: ["@picahq/mcp"],
      env: {
        ...(process.env as Record<string, string>),
        PICA_SECRET: process.env.PICA_SECRET!,
      },
    },
  },
});

const tools = await mcp.listTools();

const agent = new Agent({
  id: "pica-assistant",
  name: "PICA Assistant",
  model: "openai/gpt-4o-mini",
  instructions: "You are a helpful assistant.",
  tools,
});

// Non-streaming
const result = await agent.generate("List my connected integrations");
console.log(result.text);

// Streaming
const stream = await agent.stream("List my connected integrations", {
  maxSteps: 5,
});

for await (const chunk of stream.fullStream) {
  if (chunk.type === "text-delta") {
    process.stdout.write(chunk.payload.text);
  } else if (chunk.type === "tool-call") {
    console.log("Tool called:", chunk.payload.toolName, chunk.payload.args);
  } else if (chunk.type === "tool-result") {
    console.log("Tool result:", chunk.payload.toolName, chunk.payload.result);
  }
}

await mcp.disconnect();
```

### Streaming SSE events for a chat UI

When building a Next.js API route, stream responses as SSE events using a `ReadableStream`. Emit events in this format for compatibility with the `PythonChat` frontend component:

- `{ type: "text", content: "..." }` — streamed text chunks
- `{ type: "tool_start", name: "tool_name", input: "..." }` — tool execution starting
- `{ type: "tool_end", name: "tool_name", output: "..." }` — tool execution result
- `{ type: "error", content: "..." }` — error messages
- `data: [DONE]` — stream finished
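A translation layer between Mastra's stream chunks and this SSE format might look like the sketch below. The helper names (`toSSE`, `chunkToEvent`) are illustrative, not from Mastra or PICA; the chunk shapes follow the table documented below.

```typescript
// Illustrative helpers: serialize chat events as SSE "data:" lines and map
// Mastra fullStream chunks onto them. Names are not part of any library API.
type ChatEvent =
  | { type: "text"; content: string }
  | { type: "tool_start"; name: string; input: string }
  | { type: "tool_end"; name: string; output: string }
  | { type: "error"; content: string };

function toSSE(event: ChatEvent): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}

const DONE = "data: [DONE]\n\n";

// Translate one chunk into a ChatEvent, or null for chunk types the UI ignores.
function chunkToEvent(chunk: { type: string; payload: any }): ChatEvent | null {
  switch (chunk.type) {
    case "text-delta":
      return { type: "text", content: chunk.payload.text };
    case "tool-call":
      return {
        type: "tool_start",
        name: chunk.payload.toolName,
        input: JSON.stringify(chunk.payload.args),
      };
    case "tool-result":
      return {
        type: "tool_end",
        name: chunk.payload.toolName,
        output: JSON.stringify(chunk.payload.result),
      };
    case "error":
      return { type: "error", content: String(chunk.payload.error) };
    default:
      return null;
  }
}
```

In a route handler, each non-null event would be enqueued into the `ReadableStream`, with `DONE` written after the loop completes.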

### Key stream chunk types

Mastra's `fullStream` yields typed chunks where all data lives on `chunk.payload`:

| Chunk Type | Payload Fields | Description |
|:---|:---|:---|
| `text-delta` | `payload.text` | Streamed text content |
| `tool-call` | `payload.toolName`, `payload.toolCallId`, `payload.args` | Tool invocation |
| `tool-result` | `payload.toolName`, `payload.toolCallId`, `payload.result`, `payload.isError` | Tool output |
| `step-finish` | `payload.stepResult`, `payload.output` | Agent step completed |
| `finish` | `payload.stepResult`, `payload.output`, `payload.messages` | Stream finished |
| `error` | `payload.error` | Error occurred |

**Important:** Chunk data is always on `chunk.payload`, not directly on the chunk. For example, use `chunk.payload.text` (not `chunk.textDelta`).

### Model string format

Mastra uses `"provider/model-name"` strings — no separate provider packages needed:

```typescript
model: "openai/gpt-4o-mini"
model: "anthropic/claude-4-5-sonnet"
model: "google/gemini-2.5-flash"
```

### Static vs dynamic tools

**Static** (at agent creation): Merge MCP tools into the agent's `tools`:

```typescript
const agent = new Agent({
  tools: await mcp.listTools(),
});
```

**Dynamic** (per request): Inject via `toolsets` at call time:

```typescript
const result = await agent.generate("...", {
  toolsets: await mcp.listToolsets(),
});
```

Use dynamic toolsets when the MCP client is created per-request (e.g., with user-specific credentials).
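The per-request lifecycle (create client, inject toolsets, always disconnect) can be sketched as a wrapper. The helper and its `McpLike` structural type are illustrative stand-ins, not a Mastra API; they only mirror the methods this document uses:

```typescript
// Illustrative per-request wrapper. `McpLike` is a structural stand-in for the
// MCP client so the cleanup pattern is visible without the real dependency.
interface McpLike {
  listToolsets(): Promise<Record<string, unknown>>;
  disconnect(): Promise<void>;
}

async function withToolsets<T>(
  mcp: McpLike,
  run: (toolsets: Record<string, unknown>) => Promise<T>,
): Promise<T> {
  try {
    // e.g. run = (toolsets) => agent.generate("...", { toolsets })
    return await run(await mcp.listToolsets());
  } finally {
    // Always clean up the subprocess, even if the request fails.
    await mcp.disconnect();
  }
}
```

With a real client created from the user's credentials, `run` would call `agent.generate` or `agent.stream` with the injected `toolsets`.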

## Checklist

When setting up PICA MCP with Mastra:

- [ ] `@mastra/core` is installed
- [ ] `@mastra/mcp` is installed
- [ ] `PICA_SECRET` is set in `.env.local`
- [ ] Provider API key (e.g., `OPENAI_API_KEY`) is set in `.env.local`
- [ ] `.env.example` documents all required env vars
- [ ] `MCPClient` uses `command: "npx"`, `args: ["@picahq/mcp"]` (stdio auto-detected)
- [ ] Full `process.env` is spread into the MCP server's `env` option
- [ ] `MCPClient` has a unique `id` to prevent memory leaks with multiple instances
- [ ] Tools from `mcp.listTools()` are passed to the Agent's `tools`
- [ ] `agent.stream()` is called with `maxSteps` to limit tool call iterations
- [ ] Stream chunks are read from `chunk.payload` (not directly from chunk)
- [ ] `mcp.disconnect()` is called in a `finally` block to clean up connections

## Overview

This skill integrates PICA into a Mastra agent using Mastra's MCP support. It shows how to run PICA's MCP server as a local subprocess, supply secrets, and wire MCP tools into Mastra agents for both streaming and non-streaming workflows. Use it to add third-party integrations (CRMs, email, calendars, databases) to your Mastra-powered assistant.

## How this skill works

The skill creates an `MCPClient` configured to run `@picahq/mcp` via `npx` (stdio transport) and passes environment variables, including `PICA_SECRET`. It lists the available tools from the MCP server, injects them into an `Agent` (statically or per request), then uses `agent.generate` or `agent.stream` to handle tool calls automatically. Streamed chunks arrive on `stream.fullStream`, and all data is exposed on `chunk.payload` (`text-delta`, `tool-call`, `tool-result`, etc.).

## When to use it

- Adding PICA-powered integrations to a Mastra agent (CRMs, email, calendars, databases).
- Running PICA as a local MCP subprocess via `npx`/stdio transport.
- Injecting third-party tools into a Mastra agent, statically or dynamically.
- Building a chat UI that streams tool execution and results via SSE.
- Implementing per-user credentials where the MCP client is created per request.

## Best practices

- Store `PICA_SECRET` and provider API keys in environment variables and document them in `.env.example`.
- Run `@picahq/mcp` via `npx` with `command`/`args` so Mastra auto-detects stdio transport.
- Spread `process.env` into the MCP server `env` so `PATH` and other system vars are inherited.
- Give each `MCPClient` a unique `id` to avoid memory leaks when creating multiple instances.
- Always read stream chunks from `chunk.payload` and limit agent iterations with `maxSteps`.
- Call `mcp.disconnect()` in a `finally` block to ensure cleanup.

## Example use cases

- List and interact with connected integrations via `agent.generate` for short commands.
- Stream conversational responses and tool executions to a chat UI using `agent.stream` and SSE events.
- Create per-user MCP clients and inject `toolsets` at request time to access user-specific integrations.
- Build workflows that call external services (e.g., send email, query a CRM) and surface results in the assistant's response.

## FAQ

**How do I provide the PICA secret to the MCP server?**

Set `PICA_SECRET` in your environment (e.g., `.env.local`) and spread `process.env` into the `MCPClient` server `env` so the subprocess inherits it.

**Should I install `@picahq/mcp` as a dependency?**

You can run it via `npx` without installing; configure `MCPClient` with `command: "npx"` and `args: ["@picahq/mcp"]`.