
vercel-ai-sdk-best-practices skill

This skill helps you apply Vercel AI SDK best practices in Next.js 15 with streaming, server components, and secure, efficient integrations.

```bash
npx playbooks add skill oimiragieo/agent-studio --skill vercel-ai-sdk-best-practices
```

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: vercel-ai-sdk-best-practices
description: Best practices for using the Vercel AI SDK in Next.js 15 applications with React Server Components and streaming capabilities.
version: 1.0.0
model: sonnet
invoked_by: both
user_invocable: true
tools: [Read, Write, Edit]
globs: app/**/*
best_practices:
  - Follow the guidelines consistently
  - Apply rules during code review
  - Use as reference when writing new code
error_handling: graceful
streaming: supported
---

# Vercel AI SDK Best Practices Skill

<identity>
You are a coding standards expert specializing in Vercel AI SDK best practices.
You help developers write better code by applying established guidelines and best practices.
</identity>

<capabilities>
- Review code for guideline compliance
- Suggest improvements based on best practices
- Explain why certain patterns are preferred
- Help refactor code to meet standards
</capabilities>

<instructions>
When reviewing or writing code, apply these guidelines:

- Use `streamText` for streaming text responses from AI models.
- Use `streamObject` for streaming structured JSON responses.
- Handle errors with the `onError` callback or a try/catch around stream consumption; use the `onFinish` callback for finalization, logging, and cleanup.
- Use `onChunk` for real-time UI updates during streaming.
- Prefer server-side streaming for better performance and security.
- Use `smoothStream` for smoother streaming experiences.
- Implement proper loading states for AI responses.
- Use `useChat` for client-side chat interfaces when needed.
- Use `useCompletion` for client-side text completion interfaces.
- Handle rate limiting and quota management appropriately.
- Implement proper authentication and authorization for AI endpoints.
- Use environment variables for API keys and sensitive configuration.
- Cache AI responses when appropriate to reduce costs.
- Implement proper logging for debugging and monitoring.
</instructions>
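Several of the guidelines above can be combined in one route handler. A minimal sketch, assuming AI SDK 4.x with the `@ai-sdk/openai` provider; the route path and model id are illustrative:

```typescript
// app/api/chat/route.ts -- illustrative path
import { streamText, smoothStream } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'), // illustrative model id
    messages,
    // Smooth out token bursts for a steadier typing effect.
    experimental_transform: smoothStream(),
    // Fires per streamed part; useful for server-side progress hooks.
    onChunk({ chunk }) {
      if (chunk.type === 'text-delta') {
        // e.g. increment a metrics counter here
      }
    },
    // Fires once the stream completes: log usage, persist the reply, etc.
    onFinish({ text, usage }) {
      console.log('tokens used:', usage.totalTokens, 'chars:', text.length);
    },
  });

  return result.toDataStreamResponse();
}
```

Keeping this on the server means the provider key never reaches the browser, and `toDataStreamResponse` produces a response that the client-side hooks can consume directly.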

<examples>
Example usage:
```
User: "Review this code for vercel ai sdk best practices compliance"
Agent: [Analyzes code against guidelines and provides specific feedback]
```
</examples>

## Memory Protocol (MANDATORY)

**Before starting:**

```bash
cat .claude/context/memory/learnings.md
```

**After completing:** Record any new patterns or exceptions discovered.

> ASSUME INTERRUPTION: Your context may reset. If it's not in memory, it didn't happen.

Overview

This skill codifies best practices for using the Vercel AI SDK in Next.js 15 applications with React Server Components and streaming. It focuses on secure, performant streaming patterns, error handling, and pragmatic developer workflows to deliver smooth real-time AI experiences.

How this skill works

The guidance reviews typical SDK usage patterns and recommends specific APIs and hooks (streamText, streamObject, useChat, and useCompletion), emphasizing server-side streaming and smoothStream for better UX. It also covers lifecycle handlers (onChunk, onFinish) for incremental UI updates and robust error handling.
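For structured output, streamObject pairs a model call with a schema so the client receives validated, incrementally built JSON. A hedged sketch, assuming AI SDK 4.x, `@ai-sdk/openai`, and zod; the schema and prompt are illustrative:

```typescript
// app/api/extract/route.ts -- illustrative path
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { document } = await req.json();

  const result = streamObject({
    model: openai('gpt-4o-mini'), // illustrative model id
    // The schema constrains and validates the streamed JSON.
    schema: z.object({
      title: z.string(),
      tags: z.array(z.string()),
    }),
    prompt: `Extract a title and tags from:\n${document}`,
  });

  // The client can consume partial objects as they arrive.
  return result.toTextStreamResponse();
}
```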

When to use it

  • Building server-rendered AI responses with React Server Components in Next.js 15
  • Implementing real-time text or JSON streaming from AI models to the client
  • Creating chat or completion UIs that need incremental updates and low latency
  • Protecting API keys and enforcing auth on AI endpoints
  • Optimizing cost by caching or rate-limiting expensive model calls
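Rate limiting the endpoint before any model call is one way to control cost. A minimal in-memory sliding-window sketch; per-process state only, so not suitable for serverless or multi-instance deployments (use Redis or a KV store there):

```typescript
// Minimal in-memory sliding-window rate limiter.
// Note: per-process state only; use Redis or similar in production.
const windows = new Map<string, number[]>();

export function isRateLimited(
  userId: string,
  limit = 10, // max requests per window
  windowMs = 60_000, // window length in ms
): boolean {
  const now = Date.now();
  // Keep only the timestamps still inside the window.
  const timestamps = (windows.get(userId) ?? []).filter(
    (t) => now - t < windowMs,
  );
  if (timestamps.length >= limit) {
    windows.set(userId, timestamps);
    return true;
  }
  timestamps.push(now);
  windows.set(userId, timestamps);
  return false;
}
```

A route handler would call `isRateLimited(userId)` and return a 429 response before ever touching the AI SDK.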

Best practices

  • Prefer server-side streaming for security and performance; keep API keys and heavy logic off the client
  • Use streamText for textual partial outputs and streamObject for structured JSON to reduce client parsing complexity
  • Implement onChunk to update the UI incrementally and onFinish to finalize state and run cleanup or analytics
  • Wrap streaming flows with smoothStream to reduce jitter and improve perceived responsiveness
  • Use environment variables for secrets, validate scopes, and enforce auth/authorization on endpoints
  • Add robust error handling, retry/backoff, rate-limiting, logging, and cost-aware caching for repeatable responses
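The caching point can be sketched as a small TTL cache keyed by a hash of the model id and prompt. This is an illustrative in-memory version (per-process only; `cachedCompletion` and its `generate` callback are hypothetical names, where `generate` would wrap a `generateText` call):

```typescript
import { createHash } from 'node:crypto';

// Minimal in-memory TTL cache for model responses, keyed by a hash
// of model id + prompt. Per-process only; use Redis/KV in production.
const cache = new Map<string, { value: string; expiresAt: number }>();

export function cacheKey(model: string, prompt: string): string {
  return createHash('sha256').update(`${model}\n${prompt}`).digest('hex');
}

export async function cachedCompletion(
  model: string,
  prompt: string,
  generate: () => Promise<string>, // e.g. wraps a generateText call
  ttlMs = 5 * 60_000,
): Promise<string> {
  const key = cacheKey(model, prompt);
  const hit = cache.get(key);
  // Serve from cache while the entry is still fresh.
  if (hit && hit.expiresAt > Date.now()) return hit.value;

  const value = await generate();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

Caching only makes sense for repeatable prompts (e.g. canned extractions), not for open-ended chat turns.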

Example use cases

  • Server Component renders a streaming assistant reply using streamText and smoothStream for a live typing effect
  • API route streams structured extraction results with streamObject while the client applies incremental UI patches
  • Client chat interface using useChat for local state and useCompletion for one-off prompts on demand
  • Edge or server middleware enforces auth and rate limits before proxying requests to the AI SDK
  • Caching commonly requested completions to reduce model costs and improve response times
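On the client side, the chat use case above reduces to a small component. A sketch assuming AI SDK 4.x, where useChat lives in `@ai-sdk/react` (`ai/react` in older releases) and the `/api/chat` route path is illustrative:

```tsx
'use client';

import { useChat } from '@ai-sdk/react'; // 'ai/react' in older releases

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({ api: '/api/chat' }); // route path is illustrative

  return (
    <div>
      {messages.map((m) => (
        <p key={m.id}>
          <b>{m.role}:</b> {m.content}
        </p>
      ))}
      {/* Loading state gives users feedback while tokens stream in. */}
      {isLoading && <p>Assistant is typing…</p>}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```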

FAQ

Should I always stream responses?

Stream when users benefit from incremental updates or when latency matters; otherwise simple non-streamed responses are fine and may be simpler to implement.

Where should authentication live?

Keep keys on the server or edge. Validate user identity and scopes in your API routes before calling the AI SDK.
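A server-side guard for this can be sketched as follows; `getSession` is a hypothetical helper standing in for whatever auth library you use, and the model id is illustrative:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Hypothetical helper wrapping your auth provider's session lookup.
import { getSession } from '@/lib/auth';

export async function POST(req: Request) {
  // Reject before any model call: provider keys and quota stay protected.
  const session = await getSession(req);
  if (!session) {
    return new Response('Unauthorized', { status: 401 });
  }

  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4o-mini'), // illustrative model id
    messages,
  });
  return result.toDataStreamResponse();
}
```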