
user-stories-setup skill

/.agents/skills/user-stories-setup

This skill helps teams document feature requirements as testable user stories in JSON, enabling AI agents to verify and track progress.

npx playbooks add skill andrelandgraf/fullstackrecipes --skill user-stories-setup

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (550 B)
---
name: user-stories-setup
description: Create a structured format for documenting feature requirements as user stories. JSON files with testable acceptance criteria that AI agents can verify and track.
---

# User Stories Setup

To set up User Stories Setup, refer to the fullstackrecipes MCP server resource:

**Resource URI:** `recipe://fullstackrecipes.com/user-stories-setup`

If the MCP server is not configured, fetch the recipe directly:

```bash
curl -H "Accept: text/plain" https://fullstackrecipes.com/api/recipes/user-stories-setup
```

Overview

This skill creates a structured, machine-readable format for documenting feature requirements as user stories. It outputs JSON files with clear, testable acceptance criteria so AI agents can verify, track, and report progress. The format emphasizes consistency, traceability, and automation-ready fields for integration with test runners and issue trackers.
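
The canonical schema comes from the recipe itself; as a rough sketch, a story file along these lines conveys the intent (field names here are illustrative assumptions, not the recipe's schema, and on disk the story would be a plain JSON file):

```typescript
// Illustrative shape only — the canonical schema is defined by the recipe.
interface AcceptanceCriterion {
  id: string;
  description: string;
  passed?: boolean; // filled in by the verifying agent
}

interface UserStory {
  id: string;
  title: string;
  role: string;   // "As a ..."
  goal: string;   // "I want ..."
  reason: string; // "So that ..."
  priority: "low" | "medium" | "high";
  tags: string[];
  estimate?: string;
  acceptanceCriteria: AcceptanceCriterion[];
}

const story: UserStory = {
  id: "US-042",
  title: "Apply discount code at checkout",
  role: "shopper",
  goal: "apply a valid discount code to my cart",
  reason: "I pay the reduced price",
  priority: "high",
  tags: ["checkout", "payments"],
  estimate: "3 points",
  acceptanceCriteria: [
    { id: "AC-1", description: "A valid code reduces the cart total by the advertised amount" },
    { id: "AC-2", description: "An expired code is rejected with a visible error message" },
  ],
};
```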

How this skill works

The skill generates user story objects that include title, role, goal, acceptance criteria, priority, and metadata such as tags and estimates. Each acceptance criterion is expressed as a discrete, testable condition with expected inputs, outputs, and pass/fail rules that agents can execute or simulate. The skill also supports retrieving a canonical recipe from a registry endpoint and returning a ready-to-use JSON file suitable for CI pipelines and agent-based verification.
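
How an agent consumes these files depends on your test runner and pipeline; a minimal sketch of the verification loop, reusing the illustrative shapes from the Overview above and a hypothetical `runCheck` hook that maps a criterion to a real check (unit test, API call, browser assertion), might look like this:

```typescript
// Minimal agent-side verification sketch — not the skill's actual implementation.
type AcceptanceCriterion = { id: string; description: string };
type UserStory = { id: string; acceptanceCriteria: AcceptanceCriterion[] };
type CheckResult = { criterionId: string; passed: boolean };

async function verifyStory(
  story: UserStory,
  runCheck: (criterion: AcceptanceCriterion) => Promise<boolean>,
): Promise<CheckResult[]> {
  const results: CheckResult[] = [];
  for (const criterion of story.acceptanceCriteria) {
    results.push({ criterionId: criterion.id, passed: await runCheck(criterion) });
  }
  return results;
}

// Example: summarize progress for reporting back to a tracker or PR comment.
const summarize = (results: CheckResult[]) =>
  `${results.filter((r) => r.passed).length}/${results.length} criteria passing`;
```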

When to use it

  • When you need consistent, machine-readable user stories for automation and testing
  • When onboarding teams to a standard format for requirements and acceptance criteria
  • When integrating AI agents to verify story acceptance in CI/CD or test harnesses
  • When converting ad-hoc product notes into structured, trackable tasks
  • When preparing epics and sprint-ready stories with explicit pass/fail rules

Best practices

  • Write acceptance criteria as independent, atomic test cases with clear inputs and expected outputs
  • Include role and goal phrasing (As a..., I want..., So that...) for clarity and stakeholder context
  • Assign a single measurable outcome per acceptance criterion to avoid ambiguity
  • Use consistent tags and estimates to enable filtering, reporting, and agent prioritization
  • Store JSON files in version control and link them to corresponding test suites or pipelines (see the sketch after this list)
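
One lightweight way to keep that link explicit is a small mapping, versioned next to the story JSON, from criterion IDs to the tests or commands that verify them; the structure below is an assumption for illustration, not part of the recipe:

```typescript
// Hypothetical criterion-to-test mapping, stored alongside the story file so
// CI or an agent can resolve which test covers which criterion.
const testLinks: Record<string, string> = {
  "AC-1": "npm test -- checkout/discount.valid.spec.ts",
  "AC-2": "npm test -- checkout/discount.expired.spec.ts",
};
```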

Example use cases

  • Auto-generating sprint stories that agents can validate during pull request checks
  • Converting product manager notes into acceptance-driven tasks for dev and QA
  • Feeding structured stories into an AI planner to sequence work and estimate effort
  • Linking story JSON to regression tests so agents can report pass/fail status automatically
  • Creating traceability between design artifacts and testable acceptance criteria

FAQ

How are acceptance criteria formatted for agent verification?

Each criterion is a discrete JSON object with a description, input conditions, expected output, and a boolean pass/fail rule so agents can run checks or assertions programmatically.
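
As a rough illustration of such a criterion (field names and the pass rule are assumptions, not the canonical schema):

```typescript
// Illustrative criterion shape with inputs, expected output, and a pass rule
// the agent can evaluate against the observed behavior.
const criterion = {
  description: "An expired discount code is rejected",
  inputs: { code: "SUMMER-SALE", cartTotal: 50 },
  expectedOutput: { accepted: false, errorMessage: "Code expired" },
  passes: (observed: { accepted: boolean }) => observed.accepted === false,
};
```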

Where does the skill fetch the canonical recipe?

The skill can retrieve the recipe from a registry endpoint or accept the recipe payload directly so you can generate story JSON even without registry access.
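
For example, the same request shown with curl in SKILL.md can be made from a script against the documented endpoint:

```typescript
// Fetch the canonical recipe as plain text from the registry endpoint.
const res = await fetch("https://fullstackrecipes.com/api/recipes/user-stories-setup", {
  headers: { Accept: "text/plain" },
});
if (!res.ok) throw new Error(`Failed to fetch recipe: ${res.status}`);
const recipe = await res.text();
console.log(recipe);
```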