
ralph-loop skill

/skills/ralph-loop

This skill streamlines AI-driven development by defining user stories with testable criteria and coordinating looping agent runs until all criteria pass.

npx playbooks add skill andrelandgraf/fullstackrecipes --skill ralph-loop

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (1.7 KB)
---
name: ralph-loop
description: Complete setup for automated agent-driven development. Define features as user stories with testable acceptance criteria, then run AI agents in a loop until all stories pass.
---

# Ralph Loop

Complete setup for automated agent-driven development. Define features as user stories with testable acceptance criteria, then run AI agents in a loop until all stories pass.

## Prerequisites

Complete these recipes first (in order):

### AI Coding Agent Configuration

Configure AI coding agents like Cursor, GitHub Copilot, or Claude Code with project-specific patterns, coding guidelines, and MCP servers for consistent AI-assisted development.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/agent-setup
```
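
The recipe above covers the full setup. Purely as a rough illustration (this assumes Claude Code's CLAUDE.md convention; Cursor and Copilot use their own configuration files, and the guidelines below are placeholders, not part of the recipe), a minimal project guidelines file might look like:

```bash
# Hypothetical example: seed a guidelines file the coding agent reads on every run.
# CLAUDE.md is Claude Code's convention; adapt the filename to your agent.
cat > CLAUDE.md <<'EOF'
# Project guidelines
- Use TypeScript strict mode; avoid `any`.
- Run the test suite before marking any acceptance criterion as passing.
- Follow the existing patterns in src/ for routing and data access.
EOF
```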

## Cookbook - Complete These Recipes in Order

### User Stories Setup

Create a structured format for documenting feature requirements as user stories. Stories are JSON files with testable acceptance criteria that AI agents can verify and track.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/user-stories-setup
```
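
The recipe above defines the actual schema. As an illustration only (the directory, file name, and field names here are assumptions, not the recipe's format), a story file might look like this:

```bash
# Hypothetical story file -- field names are illustrative, not the recipe's schema.
mkdir -p stories
cat > stories/001-login.json <<'EOF'
{
  "id": "001-login",
  "title": "As a user, I can log in with email and password",
  "acceptanceCriteria": [
    {
      "description": "POST /api/login returns 200 and a session cookie for valid credentials",
      "test": "npm test -- login.valid",
      "passing": false
    },
    {
      "description": "Invalid credentials return 401 and no cookie is set",
      "test": "npm test -- login.invalid",
      "passing": false
    }
  ]
}
EOF
```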

### Working with User Stories

Document and track feature implementation with user stories. This recipe covers the workflow for authoring stories, building features, and marking acceptance criteria as passing.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/using-user-stories
```

### Ralph Agent Loop

Set up automated agent-driven development with Ralph. Run AI agents in a loop to implement features from user stories, verify acceptance criteria, and log progress for the next agent.

```bash
curl -H "Accept: text/markdown" https://fullstackrecipes.com/api/recipes/ralph-setup
```

Overview

This skill provides a complete setup for automated agent-driven development using Ralph Loop. Define features as user stories with testable acceptance criteria, then run AI agents in a loop to implement, test, and iterate until all stories pass. It bundles step-by-step recipes for configuring AI coding agents, authoring user stories, and orchestrating the Ralph agent loop for continuous delivery. The focus is practical automation: moving from requirements to verified code with minimal manual coordination.

How this skill works

You author user stories in a structured, machine-readable format (JSON) with clear, testable acceptance criteria. You configure AI coding agents with project patterns and coding guidelines so they produce consistent output, then launch the Ralph Loop: agents read pending stories, generate code, run tests or verification steps, report results, and hand failing items off to the next iteration. The loop repeats until every acceptance criterion is satisfied, and progress is logged for traceability.
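
A minimal sketch of such a loop is shown below. The real orchestration comes from the ralph-setup recipe; the agent CLI, story layout, and verification script here are placeholders, not prescribed by this skill.

```bash
#!/usr/bin/env bash
# Hypothetical loop sketch -- the actual orchestration comes from the ralph-setup recipe.
# Assumed stand-ins (not defined by this skill): `your-agent-cli` for the coding agent's CLI,
# story JSON files under stories/, and scripts/verify-stories.sh, which re-checks acceptance
# criteria and exits non-zero while any still fail.
set -euo pipefail

MAX_ITERATIONS=10
mkdir -p logs

for i in $(seq 1 "$MAX_ITERATIONS"); do
  if ./scripts/verify-stories.sh; then
    echo "All acceptance criteria pass."
    exit 0
  fi
  # Hand the pending stories and the previous run's notes to the next agent run.
  your-agent-cli "Pick the next failing user story in stories/, implement it, run its tests, \
and append a summary of what you changed to progress.log" | tee "logs/iteration-$i.log"
done

echo "Stopped after $MAX_ITERATIONS iterations with failing criteria." >&2
exit 1
```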

When to use it

  • When you want automated, repeatable delivery from user stories to verified code
  • When teams use AI coding agents and need consistent, project-specific configuration
  • When acceptance criteria are testable and can be expressed in a machine-readable format
  • When you need to reduce manual review cycles by automating implement-test-iterate
  • When building full-stack web AI apps with reproducible patterns and recipes

Best practices

  • Write user stories with explicit, measurable acceptance criteria that agents can run or validate
  • Standardize coding patterns and linters in the agent configuration to ensure consistent outputs
  • Start with small, well-scoped stories to validate the loop before scaling to larger features
  • Maintain a history of agent iterations and test results for debugging and auditing
  • Integrate CI and automated tests so agents can run verifications in the same environment developers use, as sketched below
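
For example (a sketch only; the script path and npm commands are assumptions about the project, not part of this skill), a single verification script can be shared by CI and the agent loop so both run identical checks:

```bash
#!/usr/bin/env bash
# Hypothetical shared verification script (e.g. scripts/verify-stories.sh):
# call it from CI and from the agent loop so both run the same checks.
# The npm scripts below are placeholders for the project's own commands.
set -euo pipefail

npm ci
npm run lint
npm test
```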

Example use cases

  • Add a new authenticated API endpoint with acceptance criteria verifying response shape and auth behavior
  • Implement a UI feature with end-to-end tests described in the story and validated by the loop
  • Refactor a module and require regression tests to pass before the story is marked complete
  • Bootstrap common full-stack patterns using the provided recipes and agent templates
  • Execute a continuous feature backlog where agents pick the next ready story and iterate until it passes

FAQ

What format should user stories use?

Use structured JSON files that include a clear description, individual acceptance criteria, and any test commands or assertions the agent can execute.

Which AI agents are supported?

The setup is agent-agnostic but provides recipes for popular coding agents (e.g., Cursor, GitHub Copilot, Claude Code) and guidance to configure them with project-specific patterns.