ai-chat skill

/skills/ai-chat

This skill helps you build and manage a full-stack AI chat app with persistence, chat list features, and automatic title generation.

npx playbooks add skill andrelandgraf/fullstackrecipes --skill ai-chat

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (3.4 KB)
---
name: ai-chat
description: Build a complete AI chat application with database persistence, chat list management, and automatic title generation.
---

# AI Chat

Build a complete AI chat application with database persistence, chat list management, and automatic title generation.

## Prerequisites

Complete these recipes first (in order):

### Type-Safe Environment Configuration

Type-safe environment variable validation using Zod with a Drizzle-like schema API. Supports server/public fields, feature flags, either-or constraints, and client-side protection.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/config-schema-setup
```
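The recipe above defines the actual schema API (Zod with a Drizzle-like interface). As a rough, dependency-free sketch of the underlying idea, here is a minimal type-safe env parser; the `parseEnv`, `required`, and `optional` names and the variable list are illustrative, not the recipe's API:

```typescript
// Minimal sketch of type-safe env validation (illustrative only; the recipe
// itself uses Zod with a Drizzle-like schema API).
type EnvSchema<T> = { [K in keyof T]: (raw: string | undefined) => T[K] };

function required(name: string) {
  return (raw: string | undefined): string => {
    if (!raw) throw new Error(`Missing required env var: ${name}`);
    return raw;
  };
}

function optional(fallback: string) {
  return (raw: string | undefined): string => raw ?? fallback;
}

function parseEnv<T>(
  schema: EnvSchema<T>,
  source: Record<string, string | undefined>,
): T {
  const out = {} as T;
  for (const key of Object.keys(schema) as (keyof T)[]) {
    out[key] = schema[key](source[key as string]);
  }
  return out;
}

// Hypothetical variables; in a real app, pass process.env as the source.
const env = parseEnv(
  {
    DATABASE_URL: required("DATABASE_URL"),
    NODE_ENV: optional("development"),
  },
  { DATABASE_URL: "postgres://user:pass@host/db" },
);
```

Failing fast at startup (rather than at first use) is what keeps a missing secret from surfacing as a confusing runtime error deep inside a request handler.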

### Neon + Drizzle Setup

Connect a Next.js app to Neon Postgres using Drizzle ORM with optimized connection pooling for Vercel serverless functions.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/neon-drizzle-setup
```

### Next.js on Vercel

Create a Next.js app running on Bun, configure the development environment, and deploy to Vercel with automatic deployments on push.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/nextjs-on-vercel
```

### Shadcn UI & Theming

Add Shadcn UI components with dark mode support using next-themes. Includes theme provider and CSS variables configuration.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/shadcn-ui-setup
```

### Authentication

Complete authentication system with Better Auth, email verification, password reset, protected routes, and account management.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/authentication
```

### URL State with nuqs

Sync React state to URL query parameters for shareable filters, search queries, and deep links to modal dialogs. Preserves UI state on browser back/forward navigation.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/nuqs-setup
```
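Inside React, nuqs handles this reactively via hooks like `useQueryState`. Conceptually, the round trip it manages looks like the sketch below; the filter shape and the `rename` parameter for a deep-linked dialog are assumptions for illustration:

```typescript
// Conceptual sketch of URL-synced state: serialize filter state to a query
// string and parse it back. nuqs does this reactively inside React.
interface ChatListFilters {
  q: string; // search query
  renameId: string | null; // id of the chat whose rename dialog is open
}

function filtersToSearch(filters: ChatListFilters): string {
  const params = new URLSearchParams();
  if (filters.q) params.set("q", filters.q);
  if (filters.renameId) params.set("rename", filters.renameId);
  return params.toString();
}

function searchToFilters(search: string): ChatListFilters {
  const params = new URLSearchParams(search);
  return { q: params.get("q") ?? "", renameId: params.get("rename") };
}
```

Because the state lives in the URL, a link like `/chats?q=billing&rename=abc` reopens the same filtered list and the same dialog, and browser back/forward restores prior UI states.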

### Pino Logging Setup

Configure structured logging with Pino. Outputs human-readable colorized logs in development and structured JSON in production for log aggregation services.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/pino-logging-setup
```
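The switch between the two output modes typically hinges on `NODE_ENV`. A plausible configuration fragment (the recipe's exact options may differ; `pino-pretty` is assumed to be installed as a dev dependency):

```typescript
// Sketch: pretty, colorized logs in development; newline-delimited JSON in
// production for log aggregation services.
import pino from "pino";

const isDev = process.env.NODE_ENV !== "production";

export const logger = pino({
  level: process.env.LOG_LEVEL ?? (isDev ? "debug" : "info"),
  // transport is only set in development; without it, pino emits raw JSON.
  transport: isDev
    ? { target: "pino-pretty", options: { colorize: true } }
    : undefined,
});
```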

### Workflow Development Kit Setup

Install and configure the Workflow Development Kit for resumable, durable AI agent workflows with step-level persistence, stream resumption, and agent orchestration.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/workflow-setup
```

## Cookbook - Complete These Recipes in Order

### AI Chat Persistence with Neon

Persist AI chat conversations to Neon Postgres with full support for AI SDK message parts including tools, reasoning, and streaming. Uses UUID v7 for chronologically-sortable IDs.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/ai-chat-persistence
```
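UUID v7 front-loads a 48-bit Unix millisecond timestamp, so IDs sort chronologically as plain strings. The recipe supplies the real implementation; as a self-contained sketch of the layout (timestamp, version nibble, RFC 4122 variant bits, then randomness):

```typescript
import { randomBytes } from "node:crypto";

// Sketch of UUID v7: bytes 0-5 hold a big-endian 48-bit ms timestamp, so
// lexicographic order matches creation order across milliseconds.
function uuidv7(now: number = Date.now()): string {
  const bytes = randomBytes(16);
  bytes[0] = (now / 2 ** 40) & 0xff;
  bytes[1] = (now / 2 ** 32) & 0xff;
  bytes[2] = (now >>> 24) & 0xff;
  bytes[3] = (now >>> 16) & 0xff;
  bytes[4] = (now >>> 8) & 0xff;
  bytes[5] = now & 0xff;
  bytes[6] = (bytes[6] & 0x0f) | 0x70; // version 7
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // RFC 4122 variant
  const hex = bytes.toString("hex");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}
```

Time-sortable primary keys let the chat list query order by `id` alone, with no separate `created_at` sort column or index needed for chronological listing.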

### Chat List & Management

Build a chat list page with search, rename, and delete functionality. Uses nuqs for URL-synced filters and deep-linkable modal dialogs.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/chat-list
```

### Automatic Chat Naming

Generate descriptive chat titles from the first message using a fast LLM. Runs as a background workflow step after the main response to avoid delaying the experience.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/chat-naming
```
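The recipe wires the LLM call into a background workflow step; the sketch below only shows the surrounding plumbing one might write, with `titlePrompt` and `fallbackTitle` as hypothetical helpers. The model call itself (e.g. via the AI SDK with a small, fast model) is left out:

```typescript
// Build a compact titling prompt from the first user message, and fall back
// to plain truncation if the LLM call fails. Both helpers are illustrative.
function titlePrompt(firstMessage: string): string {
  return [
    "Generate a concise title (max 6 words) for a chat that starts with:",
    firstMessage.slice(0, 500), // cap the prompt size
    "Reply with the title only.",
  ].join("\n\n");
}

function fallbackTitle(firstMessage: string, maxLen = 40): string {
  const flat = firstMessage.replace(/\s+/g, " ").trim();
  return flat.length <= maxLen ? flat : `${flat.slice(0, maxLen - 1)}…`;
}
```

Keeping a deterministic fallback means a failed or slow model call degrades to a usable truncated title instead of an unnamed chat.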

## Overview

This skill builds a complete AI chat application with persistent storage, chat list management, and automatic title generation. It combines a TypeScript full-stack pattern with Neon Postgres, Drizzle ORM, Shadcn UI, and background workflows to deliver a production-ready chat experience. The recipe collection provides step-by-step guidance and integrations for deployment on Vercel.

## How this skill works

Conversations are persisted to Neon Postgres using Drizzle, with UUID v7 for chronologically-sortable IDs. The chat UI uses Shadcn components and nuqs-driven URL state for shareable filters and deep links. Background workflows generate chat titles with a small, fast LLM after responses are stored, while authentication guards access and Pino's structured logging provides observability.
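A possible shape for the persistence layer, sketched with Drizzle's `pg-core` builders; the table and column names here are assumptions, and the AI chat persistence recipe defines the real schema:

```typescript
// Hypothetical chats/messages schema sketch; the recipe defines the real one.
import { jsonb, pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";

export const chats = pgTable("chats", {
  id: uuid("id").primaryKey(), // UUID v7, generated in application code
  userId: text("user_id").notNull(),
  title: text("title"), // filled in later by the naming workflow
  createdAt: timestamp("created_at").defaultNow().notNull(),
});

export const messages = pgTable("messages", {
  id: uuid("id").primaryKey(),
  chatId: uuid("chat_id")
    .references(() => chats.id, { onDelete: "cascade" })
    .notNull(),
  role: text("role").notNull(), // "user" | "assistant"
  parts: jsonb("parts").notNull(), // AI SDK message parts: text, tools, reasoning
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```

Storing message parts as `jsonb` keeps tool calls, reasoning, and streamed segments intact without a table per part type, at the cost of weaker relational guarantees inside each message.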

## When to use it

- You need a full-stack AI chat app with reliable persistence and scalable serverless deployment.
- You want URL-synced chat lists and deep-linkable modals for collaboration or support flows.
- You require consistent, production-ready patterns (auth, logging, environment validation) before adding AI features.
- You want automatic, non-blocking chat title generation to improve organization and UX.

## Best practices

- Validate environment variables with a type-safe schema before runtime to avoid leaking secrets to the client.
- Persist streaming AI messages and tool outputs so partial results can be resumed and audited.
- Run title generation as an asynchronous background step to keep user-facing latency low.
- Use UUID v7 or another time-sortable ID for predictable chronological ordering in lists.
- Configure Pino structured logs for development-friendly output and production JSON ingestion.

## Example use cases

- Customer support chat where each conversation is stored and searchable, and titles summarize ticket topics.
- Internal team assistant that keeps a history of question/answer threads with rename and delete controls.
- SaaS product chat widget that logs AI recommendations and streams responses while preserving state across sessions.
- Knowledge base builder that converts initial queries into titled threads for later curation.

## FAQ

**Does the recipe include authentication and deployment guidance?**

Yes. Authentication with Better Auth and deployment of Next.js to Vercel are covered by the prerequisite recipes.

**How are chat titles generated without slowing responses?**

Titles are produced by a fast LLM in a background workflow step that runs after the main response is saved, preventing added latency for users.