---
name: reproduce-reduce-regress
description: Systematic workflow for debugging by reproducing bugs with real data, reducing test cases to minimal examples, and adding regression tests
---
# Reproduce, Reduce, Regress
When debugging, follow this workflow:
## 1. Reproduce
**Goal:** Get a failing test that demonstrates the bug using REAL data.
- Copy the EXACT input that triggers the bug (don't paraphrase or simplify yet)
- Use the EXACT types/structs from the failing code
- Verify the test actually fails with the same error message
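The reproduction step can be sketched as a test that pins the exact failing input and error message. This is a minimal sketch: `parse_config`, `Config`, and the error text are all invented stand-ins for whatever code actually failed.

```rust
// Hypothetical types standing in for the failing code -- use the EXACT
// types/structs from your codebase, not simplified copies.
#[derive(Debug)]
struct Config {
    name: String,
    queries: Option<Vec<String>>,
}

// Stand-in parser that fails when `queries` is absent, mimicking the
// kind of bug this workflow targets.
fn parse_config(input: &str) -> Result<Config, String> {
    if !input.contains("queries") {
        return Err("missing field `queries`".to_string());
    }
    Ok(Config {
        name: "app".to_string(),
        queries: Some(Vec::new()),
    })
}

#[test]
fn test_reproduce_with_exact_input() {
    // Paste the EXACT input from the bug report; do not simplify it yet.
    let input = r#"name = "app""#;
    let err = parse_config(input).unwrap_err();
    // Confirm the failure matches the reported error message verbatim.
    assert_eq!(err, "missing field `queries`");
}
```

Asserting on the exact error message, not just `is_err()`, guards against accidentally reproducing a *different* failure.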
## 2. Reduce
**Goal:** Find the MINIMAL input that still triggers the bug.
- Create MULTIPLE test variants rather than commenting things in and out
- Name them descriptively: `test_minimal_one_field`, `test_with_queries`, etc.
- Binary search: remove half the input, see if it still fails
- Keep narrowing until you find the smallest failing case
- Also find a PASSING case that's as close as possible to the failing one
The difference between your minimal failing case and your minimal passing case IS the bug.
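The reduction steps above can be sketched as separate, descriptively named variants. The `parse` function and its failure condition are invented for illustration; the point is the structure: one smallest failing case, one closest passing case, and the workaround documented alongside them.

```rust
// Hypothetical function under test: it chokes when `queries` is missing
// AND more than one other field is present (an invented failure rule).
fn parse(fields: &[&str]) -> Result<(), String> {
    if !fields.contains(&"queries") && fields.len() > 1 {
        return Err("ambiguous config without `queries`".to_string());
    }
    Ok(())
}

// Separate named variants -- nothing commented in or out.

#[test]
fn test_minimal_one_field() {
    // Closest PASSING case: one field, no `queries`.
    assert!(parse(&["name"]).is_ok());
}

#[test]
fn test_minimal_two_fields() {
    // Smallest FAILING case: two fields, no `queries`.
    assert!(parse(&["name", "retries"]).is_err());
}

#[test]
fn test_with_queries() {
    // Adding `queries` avoids the failure; documents expected behavior.
    assert!(parse(&["name", "retries", "queries"]).is_ok());
}
```

Here the only difference between the passing and failing minimal cases is the second field, which points directly at the faulty condition.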
## 3. Regress (Regression Tests)
**Goal:** Ensure the bug never comes back.
- Keep ALL your test variants, both passing and failing
- The failing ones become regression tests after the fix
- The passing ones document expected behavior
- Name tests after the issue number: `test_issue_1356_*`
## Anti-patterns
❌ Commenting code in/out to test different scenarios
❌ Modifying a single test repeatedly
❌ "Simplifying" input without verifying the bug still reproduces
❌ Deleting test variants after finding the bug
❌ Theorizing about what MIGHT cause the bug before reproducing
## Example Structure
```rust
// Minimal failing case
#[test]
fn test_issue_1356_fails_without_queries_default() { ... }
// Minimal passing case (shows the workaround)
#[test]
fn test_issue_1356_passes_with_queries_default() { ... }
// Original reproduction from user's code
#[test]
fn test_issue_1356_full_reproduction() { ... }
```
## FAQ

**What if the bug disappears when I try to reproduce it?**

Capture the original runtime state: exact input, versions, environment variables, and any external dependencies. Use logs, core dumps, or trace data to recreate the conditions. If the failure is intermittent, increase logging or add deterministic seeds.

**How small should the minimal failing case be?**

As small as possible while still triggering the same error and stack trace. Smaller tests are easier to reason about and more likely to isolate the root cause; stop reducing once removing any more detail makes the failure go away.
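The "deterministic seeds" advice can be sketched as follows. `Lcg` here is a toy stand-in PRNG (constants from Knuth's MMIX generator); in real code you would seed whatever randomness source the failing code actually uses.

```rust
// Toy linear congruential generator used only to illustrate seeding.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        // MMIX LCG constants (Knuth).
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

#[test]
fn test_flaky_failure_with_fixed_seed() {
    // Fixing the seed pins down the exact sequence that triggered the
    // bug, so the failure reproduces on every run instead of sometimes.
    let mut rng_a = Lcg(42);
    let run_a: Vec<u64> = (0..3).map(|_| rng_a.next() % 100).collect();

    let mut rng_b = Lcg(42);
    let run_b: Vec<u64> = (0..3).map(|_| rng_b.next() % 100).collect();

    // The same seed always yields the same sequence.
    assert_eq!(run_a, run_b);
}
```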