
capture-api-response-test-fixture skill


This skill captures real API responses from providers and stores them as structured test fixtures, streamlining testing and fixture generation.

npx playbooks add skill vercel/ai --skill capture-api-response-test-fixture

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.0 KB
---
name: capture-api-response-test-fixture
description: Capture API response test fixture.
metadata:
  internal: true
---

### API Response Test Fixtures

For provider response parsing tests, we aim to store test fixtures containing the actual responses from the providers (unless they are too large, in which case trimming that does not change the semantics is advised).

The fixtures are stored in a `__fixtures__` subfolder, e.g. `packages/openai/src/responses/__fixtures__`. See the file names in `packages/openai/src/responses/__fixtures__` for naming conventions and `packages/openai/src/responses/openai-responses-language-model.test.ts` for how to set up test helpers.
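
As a rough sketch (assuming a Vitest-style test; the fixture name below is a placeholder, not one of the real fixtures), a parsing test typically reads a stored fixture and asserts against it:

```ts
// Sketch only: 'example-response.json' is a placeholder fixture name.
import { readFileSync } from 'node:fs';
import { describe, expect, it } from 'vitest';

describe('response parsing', () => {
  it('parses a captured provider response', () => {
    // Load a captured response body from the __fixtures__ folder.
    const fixture = JSON.parse(
      readFileSync(
        new URL('./__fixtures__/example-response.json', import.meta.url),
        'utf8',
      ),
    );

    // Replace with assertions against the parser under test.
    expect(fixture).toBeDefined();
  });
});
```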

You can use our examples under `/examples/ai-functions` to generate test fixtures.

#### generateText (doGenerate testing)

For `generateText`, log the raw response output to the console and copy it into a new test fixture.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { run } from '../lib/run';

run(async () => {
  const result = await generateText({
    model: openai('gpt-5-nano'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

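  // Log the raw provider response body so it can be copied into a new fixture file.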
  console.log(JSON.stringify(result.response.body, null, 2));
});
```

#### streamText (doStream testing)

For `streamText`, you need to set `includeRawChunks` to `true` and use the special `saveRawChunks` helper. Run the script from the `/examples/ai-functions` folder via `pnpm tsx src/stream-text/script-name.ts`. The result is stored in the `/examples/ai-functions/output` folder; copy it to your fixtures folder and rename it.

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { run } from '../lib/run';
import { saveRawChunks } from '../lib/save-raw-chunks';

run(async () => {
  const result = streamText({
    model: openai('gpt-5-nano'),
    prompt: 'Invent a new holiday and describe its traditions.',
    includeRawChunks: true,
  });

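  // Write the raw stream chunks to /examples/ai-functions/output for use as a fixture.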
  await saveRawChunks({ result, filename: 'openai-gpt-5-nano' });
});
```

Overview

This skill captures real API response test fixtures for language-model provider integrations. It helps teams record actual provider outputs (or trimmed equivalents) so parser and behavior tests run against realistic data. The goal is reliable, reproducible tests for generate and stream flows.

How this skill works

The skill logs and saves raw provider responses into a __fixtures__ folder alongside the provider code. For standard generateText calls you serialize the response body; for streaming responses you enable includeRawChunks and use a helper that writes the raw chunks to disk. Saved fixtures become canonical inputs for parsing and unit tests.

When to use it

  • Creating unit tests for provider response parsing
  • Verifying behavior changes after model upgrades
  • Reproducing edge-case provider outputs observed in production
  • Building CI regression tests that depend on provider responses
  • Sharing standardized sample responses across repositories

Best practices

  • Store fixtures under a __fixtures__ subfolder next to the provider code for discoverability
  • Keep fixtures as raw as possible; only trim very large fields without changing semantics
  • Name files consistently to reflect provider, model, and endpoint
  • Use generateText logging for non-streamed responses and includeRawChunks + save helper for streams
  • Add a short test or helper that validates fixtures remain parsable, to catch schema drift; see the sketch after this list
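
A minimal sketch of such a parsability check, assuming a Vitest-style test and JSON fixtures stored next to it:

```ts
// Sketch only: assumes a __fixtures__ folder of JSON files next to this test.
import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { fileURLToPath } from 'node:url';
import { describe, expect, it } from 'vitest';

const fixturesDir = fileURLToPath(new URL('./__fixtures__', import.meta.url));

describe('fixtures', () => {
  for (const file of readdirSync(fixturesDir).filter(f => f.endsWith('.json'))) {
    it(`${file} stays parsable`, () => {
      // JSON.parse should not throw; extend with schema assertions as needed.
      expect(() =>
        JSON.parse(readFileSync(join(fixturesDir, file), 'utf8')),
      ).not.toThrow();
    });
  }
});
```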

Example use cases

  • Run a script that calls generateText, log result.response.body as JSON, and paste it into packages/openai/src/responses/__fixtures__
  • Execute a streamText script with includeRawChunks: true and saveRawChunks to capture chunked outputs for streaming tests
  • Capture rare error payloads from a noisy model to reproduce and fix parsing bugs
  • Use example app outputs (examples/ai-functions/output) as the basis for official fixtures
  • Create CI checks that compare current parsing results against stored fixtures to detect regressions

FAQ

What should I do if a response is too large?

Trim fields that don't affect parsed semantics, but keep structure and representative content intact. Document any trimming in the fixture filename or an adjacent README.
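
For example, a one-off trimming step might shorten an oversized field while keeping the response shape intact (the path and field name below are illustrative):

```ts
// Illustrative only: 'long_field' stands in for whatever field is oversized.
import { readFileSync, writeFileSync } from 'node:fs';

const path = 'packages/openai/src/responses/__fixtures__/example-response.json';
const fixture = JSON.parse(readFileSync(path, 'utf8'));

// Shorten the oversized field without changing the overall structure.
if (typeof fixture.long_field === 'string') {
  fixture.long_field = fixture.long_field.slice(0, 500);
}

writeFileSync(path, JSON.stringify(fixture, null, 2));
```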

How do I capture streaming responses?

Enable includeRawChunks on the streamText call and use the provided saveRawChunks helper to write the raw chunks to the examples output folder, then move the saved file into the fixtures directory.