This skill generates UI screens from text prompts, exports designs to React components, and creates DESIGN.md design systems to accelerate UI workflows.

Install: `npx playbooks add skill brixtonpham/claude-config --skill stitch`
---
name: stitch
description: "Google Stitch UI design tool. Generate screens from text prompts, convert designs to React components, create DESIGN.md design systems. Use when: designing UI, generating screens, converting Stitch to code, creating design tokens. Keywords: stitch, design, UI, screen, generate, react, components, DESIGN.md, wireframe, prototype, mockup."
allowed-tools:
- "stitch:*"
- "Read"
- "Write"
- "Bash"
- "WebFetch"
---
# Stitch UI Design Skill
Google Stitch MCP integration for AI-powered UI design generation.
## Workflows
### 1. Generate New Screen
```bash
mcp-cli call stitch/generate_screen_from_text '{"projectId": "ID", "prompt": "description", "deviceType": "DESKTOP"}'
```
### 2. Export to React
→ Retrieve the generated screen, then invoke the `react:components` skill (see the sketch below)
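A minimal hand-off sketch, assuming the screen ID comes from the generation step; the `projects/123` and `screen-abc` values are hypothetical placeholders:
```bash
# Fetch the generated screen so its output can be handed to react:components.
# projectId and screenId are hypothetical placeholders.
mcp-cli call stitch/get_screen '{"projectId": "projects/123", "screenId": "screen-abc"}'
```
Pass the returned screen data to the `react:components` skill to export component code.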
### 3. Create Design System
→ Invoke the `design-md` skill to generate DESIGN.md
## MCP Tools
| Tool | Parameters |
|------|------------|
| `stitch/list_projects` | filter: "view=owned" or "view=shared" |
| `stitch/create_project` | title: string |
| `stitch/get_project` | name: "projects/{id}" |
| `stitch/list_screens` | projectId: "projects/{id}" |
| `stitch/get_screen` | projectId, screenId |
| `stitch/generate_screen_from_text` | projectId, prompt, deviceType, modelId |
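As a sketch of how the project tools compose (parameter shapes follow the table above; the title and `projects/123` ID are hypothetical):
```bash
# List projects you own, create a new one, then read a project back by name.
mcp-cli call stitch/list_projects '{"filter": "view=owned"}'
mcp-cli call stitch/create_project '{"title": "Marketing site redesign"}'
mcp-cli call stitch/get_project '{"name": "projects/123"}'
```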
## Related Skills
- `design-md` - Extract design tokens → DESIGN.md
- `react:components` - Convert screens → React code
## Overview
This skill integrates with Google Stitch to generate UI screens from text prompts, export designs to React components, and produce a DESIGN.md design system. It streamlines the flow from idea to interactive mockup by combining generation, project management, and code export. Use it to accelerate UI iteration, create consistent design tokens, and get production-ready component code.

The skill talks to the Stitch project APIs to list, create, and retrieve projects and screens. It generates new screens from natural-language prompts and device targets, then pairs with a React conversion step to export components. It can also feed design tokens into a DESIGN.md generator to produce a living design system document.
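A minimal sketch of that flow, assuming an existing project (the project ID and prompt are hypothetical):
```bash
# Generate a screen in an existing project, then confirm it appears
# before handing it to the react:components skill.
mcp-cli call stitch/generate_screen_from_text \
  '{"projectId": "projects/123", "prompt": "analytics dashboard with sidebar nav", "deviceType": "DESKTOP"}'
mcp-cli call stitch/list_screens '{"projectId": "projects/123"}'
```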
## FAQ
**What inputs are required to generate a screen?**
Provide a `projectId`, a natural-language prompt describing the UI, and a `deviceType` (e.g., `DESKTOP` or `MOBILE`).
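For example (hypothetical project ID and prompt, with `MOBILE` as the device target):
```bash
# Minimal generate call with the three required inputs.
mcp-cli call stitch/generate_screen_from_text \
  '{"projectId": "projects/123", "prompt": "login form with social sign-in", "deviceType": "MOBILE"}'
```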
**How do I get code from a generated screen?**
After generating a screen, invoke the `react:components` conversion workflow to export React component code.