
ralph-prd skill

/opencode/skills/ralph-prd

This skill generates structured prd.json files that plan bulk tasks as independently verifiable work items for autonomous agent loops.

npx playbooks add skill third774/dotfiles --skill ralph-prd


SKILL.md
---
name: ralph-prd
description: Generate structured prd.json files for autonomous agent loops (Ralph Wiggum pattern). Use when planning bulk/batch tasks, migrations, refactoring campaigns, or any work that can be decomposed into independent items with verification steps.
---

# Ralph PRD Generation

Generate `prd.json` files that define scoped work items for autonomous agent execution. Each item has explicit completion criteria and verification steps.

## When to Use

- Batch migrations (API changes, library upgrades, lint fixes)
- Large-scale refactoring across many files
- Any task decomposable into independent, verifiable units
- Work that benefits from "done" being explicitly defined

## PRD Structure

```json
{
  "instructions": "<markdown with context, examples, constraints>",
  "items": [
    {
      "id": "<unique identifier>",
      "category": "<task category>",
      "description": "<what needs to be done>",
      "file": "<target file path>",
      "steps": [
        "<action step>",
        "<verification step>"
      ],
      "passes": false,
      "skipped": null
    }
  ]
}
```

### Field Reference

| Field | Purpose |
|-------|---------|
| `instructions` | Markdown embedded in PRD - transformation examples, docs links, constraints |
| `id` | Unique identifier (typically file path or task name) |
| `category` | Groups related items |
| `description` | Human-readable summary |
| `steps` | Actions + verification commands |
| `passes` | `false` initially, `true` when complete |
| `skipped` | `null` or `"<reason>"` if task cannot be completed |
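
For illustration, here is a hypothetical item in each terminal state; the file names are invented, and the skip reason reuses the example given under Skip Conditions below:

```json
[
  {
    "id": "src/auth/LoginForm.tsx",
    "category": "migration",
    "description": "Fix violations in LoginForm.tsx",
    "file": "src/auth/LoginForm.tsx",
    "steps": ["Fix all 3 lint errors", "Run yarn type-check:go - must pass"],
    "passes": true,
    "skipped": null
  },
  {
    "id": "src/legacy/Dashboard.tsx",
    "category": "migration",
    "description": "Fix violations in Dashboard.tsx",
    "file": "src/legacy/Dashboard.tsx",
    "steps": ["Fix all 5 lint errors"],
    "passes": false,
    "skipped": "class component requires manual refactor"
  }
]
```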

## Generation Workflow

```
PRD Generation Progress:
- [ ] Step 1: Define scope (what files/items are affected?)
- [ ] Step 2: Gather input data (lint output, file list, API changes)
- [ ] Step 3: Design item granularity (per-file, per-error, per-component?)
- [ ] Step 4: Define verification steps (type-check, tests, lint)
- [ ] Step 5: Write instructions (examples, constraints, skip conditions)
- [ ] Step 6: Generate items (script or manual)
- [ ] Step 7: Review sample items
```

## Clarifying Questions

Before generating, resolve these with the user:

### Granularity
- Per-file? Per-error? Per-component?
- Trade-off: fewer items = less overhead, more items = finer progress tracking

### Verification Steps  
- What commands confirm completion?
- Type-check? Tests? Lint? Build?
- Which tests - related test file only, or broader?

### Instructions Content
- What context does the executing agent need?
- Before/after examples?
- Links to documentation?
- Type casting or naming conventions?

### Skip Conditions
- What should cause an item to be skipped rather than fixed?
- Example: "class component requires manual refactor"

### Path Format
- Relative or absolute paths?
- ID format (filename-only IDs risk collisions; see the example below)
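
For example (hypothetical paths), filename-only IDs collide while repo-relative paths stay unique:

```
src/auth/utils.ts    -> id "utils.ts"           (collides)
src/billing/utils.ts -> id "utils.ts"           (collides)
src/auth/utils.ts    -> id "src/auth/utils.ts"  (unique)
```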

## Instructions Section Best Practices

The `instructions` field is markdown that the executing agent reads. Include:

1. **Violation/task types** with before/after examples
2. **Scope rules** - what's in bounds, what's out
3. **Skip conditions** - when to mark `skipped: "<reason>"` instead of fixing
4. **Links** to relevant documentation
5. **Type/naming conventions** specific to the codebase

Keep instructions focused. The agent discovers patterns; instructions provide guardrails.
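
As a sketch, an `instructions` value for a hypothetical lint migration might look like this (the rule name, commands, and link are placeholders, not real conventions):

```javascript
const instructions = `## Migration Instructions

Fix all \`no-legacy-fetch\` violations (hypothetical rule).

Before: \`legacyFetch(url).then(handle)\`
After:  \`const res = await apiClient.get(url)\`

Scope: only the file named in each item; do not edit generated code.
Skip with \`skipped: "<reason>"\` if a file needs a class-to-function refactor.
Docs: <link to the team's migration guide>`;
```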

## Verification Steps

Each item should have at least one verification step. Common patterns:

```json
"steps": [
  "Fix all N lint errors for rule-name",
  "Run yarn type-check:go - must pass",
  "Run yarn test <path> - if test exists"
]
```

For test detection, check:
- `__tests__/<filename>.test.{ts,tsx,js,jsx}`
- `<filename>.test.{ts,tsx,js,jsx}` sibling
- `__tests__/integration/<filename>.test.*`
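
A minimal detection sketch in Node (the patterns mirror the list above; adjust extensions and directories to your repo layout):

```javascript
const fs = require('node:fs');
const path = require('node:path');

// Return the first existing test file for a source file, or null.
function findTestFile(filePath) {
  const dir = path.dirname(filePath);
  const base = path.basename(filePath).replace(/\.[jt]sx?$/, '');
  const exts = ['ts', 'tsx', 'js', 'jsx'];
  const candidates = exts.flatMap(ext => [
    path.join(dir, '__tests__', `${base}.test.${ext}`),
    path.join(dir, `${base}.test.${ext}`),
    path.join(dir, '__tests__', 'integration', `${base}.test.${ext}`),
  ]);
  return candidates.find(p => fs.existsSync(p)) ?? null;
}
```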

## Example: Generating from Lint Output

Input: a JSON array of lint errors grouped by file, one entry per file with `filePath` and `errorCount`

```javascript
const path = require('node:path');

// Assumptions: REPO_ROOT is the repository root, lintErrors is the parsed
// input array, and findTestFile is the sketch from "Verification Steps".
const prd = {
  instructions: `## Migration Instructions...`,
  items: lintErrors.map(entry => {
    const testPath = findTestFile(entry.filePath);
    return {
      id: entry.filePath.replace(REPO_ROOT + '/', ''),
      category: 'migration',
      description: `Fix violations in ${path.basename(entry.filePath)}`,
      file: entry.filePath,
      errorCount: entry.errorCount,
      steps: [
        `Fix all ${entry.errorCount} lint errors`,
        'Run yarn type-check:go - must pass',
        ...(testPath ? [`Run yarn test ${testPath}`] : [])
      ],
      passes: false,
      skipped: null
    };
  })
};
```

## Anti-Patterns

### Vague verification
```json
// Bad
"steps": ["Fix the issue", "Make sure it works"]

// Good  
"steps": ["Fix lint error on line 42", "Run yarn type-check:go - must pass"]
```

### Missing skip conditions
If some items can't be completed (e.g., requires larger refactor), define skip conditions in instructions so agents mark `skipped` instead of attempting impossible fixes.

### Over-scoped items
Items that touch many files are harder to verify and resume. Prefer one file per item for file-based migrations.

### Under-specified instructions
The executing agent shouldn't have to guess conventions. Specify type casting, naming patterns, import sources.

Overview

This skill generates structured prd.json files that break large work into independent, verifiable items following the Ralph Wiggum pattern. It produces a scoped instructions block plus per-item fields (id, category, description, file, steps, passes, skipped) so autonomous agents can execute, verify, and report progress. Use it to plan bulk migrations, refactors, or any batch task that benefits from explicit “done” criteria.

How this skill works

The skill inspects input sources such as file lists, lint output, or a user-defined scope and emits a prd.json containing an instructions markdown block and an items array. Each item is one independent unit (typically file-scoped) with concrete action and verification steps. Items start with passes:false and skipped:null so agents can update status as they complete or skip tasks.
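
A minimal consumer-side sketch (assuming the executing loop runs in Node and updates prd.json in place; the status-update logic is illustrative, not a prescribed API):

```javascript
const fs = require('node:fs');

// Load the PRD and pick the next actionable item.
const prd = JSON.parse(fs.readFileSync('prd.json', 'utf8'));
const next = prd.items.find(item => !item.passes && item.skipped === null);

// After the agent completes (or skips) the item, persist the new status.
if (next) {
  next.passes = true; // or: next.skipped = '<reason>';
  fs.writeFileSync('prd.json', JSON.stringify(prd, null, 2));
}
```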

When to use it

  • Batch migrations: API upgrades, dependency bumps, codemods
  • Large-scale refactors touching many files or components
  • Lint or formatting campaigns that can be fixed per-file
  • Test or type-fix sweeps where each file can be verified independently
  • Any work you want tracked as discrete, verifiable items

Best practices

  • Prefer per-file granularity to simplify verification and retries
  • Embed clear before/after examples and constraints in instructions markdown
  • Define precise verification commands (type-check, test, lint) for each item
  • Include explicit skip conditions and reasons to avoid wasted attempts
  • Group related items with a meaningful category to aid filtering and progress reports

Example use cases

  • Generate PRD from eslint output to create one item per file with lint fix and type-check steps
  • Create a migration PRD for a library upgrade, listing files that need import updates and verification commands
  • Plan a refactor campaign by producing items per component with steps to update props and run related tests
  • Prepare a test-stabilization PRD where each flaky test file becomes an item with reproduce and fix steps
  • Bulk-rename or codemod work where each affected file is an item and verification runs the test or build

FAQ

How should I choose item granularity?

Choose per-file for easier verification and retryability; use coarser grouping only when changes are tightly coupled and must be applied together.

What belongs in the instructions field?

Include scope rules, before/after examples, skip conditions, and links to docs or codemods so the executing agent has unambiguous guardrails.