
systematic-debugging skill

/.claude/skills/systematic-debugging

This skill guides you through root-cause debugging before fixes, ensuring systematic investigation, pattern comparison, and minimal, validated changes.

npx playbooks add skill mjunaidca/mjs-agent-skills --skill systematic-debugging


SKILL.md
---
name: systematic-debugging
description: |
  Systematic methodology for debugging bugs, test failures, and unexpected behavior.
  Use when encountering any technical issue before proposing fixes. Covers root cause
  investigation, pattern analysis, hypothesis testing, and fix implementation.
  Use ESPECIALLY when under time pressure, "just one quick fix" seems obvious, or
  you've already tried multiple fixes. NOT for exploratory code reading.
---

# Systematic Debugging

Random fixes waste time and create new bugs. Quick patches mask underlying issues.

**Core principle:** ALWAYS find root cause before attempting fixes. Symptom fixes are failure.

## The Iron Law

```
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
```

If you haven't completed Phase 1, you cannot propose fixes.

---

## The Four Phases

### Phase 1: Root Cause Investigation

**BEFORE attempting ANY fix:**

1. **Read Error Messages Carefully**
   - Don't skip past errors or warnings
   - Read stack traces completely
   - Note line numbers, file paths, error codes

2. **Reproduce Consistently**
   - Can you trigger it reliably?
   - What are the exact steps?
   - If not reproducible, gather more data - don't guess

3. **Check Recent Changes**
   - Git diff, recent commits
   - New dependencies, config changes
   - Environmental differences

4. **Gather Evidence in Multi-Component Systems**

   When the system has multiple components (CI -> build -> signing, API -> service -> database), instrument every boundary (a runnable sketch follows this list):

   ```
   For EACH component boundary:
     - Log what data enters component
     - Log what data exits component
     - Verify environment/config propagation

   Run once to gather evidence showing WHERE it breaks
   THEN analyze to identify failing component
   ```

5. **Trace Data Flow**

   See [references/root-cause-tracing.md](references/root-cause-tracing.md) for backward tracing technique.

   Quick version: Where does the bad value originate? Keep tracing upstream until you find the source. Fix at the source, not the symptom.
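
A minimal sketch of that boundary logging, assuming a hypothetical two-stage pipeline (`build` -> `sign`); the decorator records what enters and exits each component so a single run shows where the data goes bad:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")

def log_boundary(func):
    """Log what enters and exits a component boundary."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.debug("-> %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.debug("<- %s returned %r", func.__name__, result)
        return result
    return wrapper

@log_boundary
def build(source):  # hypothetical pipeline stage
    return {"artifact": source, "signed": False}

@log_boundary
def sign(artifact):  # hypothetical pipeline stage
    artifact["signed"] = True
    return artifact

sign(build("src/"))  # one instrumented run shows WHERE it breaks
```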

### Phase 2: Pattern Analysis

1. **Find Working Examples** - Locate similar working code in same codebase
2. **Compare Against References** - Read reference implementations COMPLETELY, don't skim
3. **Identify Differences** - List every difference between working and broken
4. **Understand Dependencies** - What settings, config, environment assumptions?

### Phase 3: Hypothesis and Testing

1. **Form Single Hypothesis** - "I think X is the root cause because Y"
2. **Test Minimally** - SMALLEST possible change, one variable at a time
3. **Verify Before Continuing** - Worked? Move to Phase 4. Didn't? Form a NEW hypothesis; don't stack fixes
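
A hedged illustration with an invented bug ("parsing fails because the payload starts with a UTF-8 BOM"): the minimal test changes exactly one variable and nothing else:

```python
import json

clean = '{"ok": true}'
with_bom = "\ufeff" + clean  # the ONLY variable under test

json.loads(clean)  # baseline: parses fine
try:
    json.loads(with_bom)
    print("hypothesis rejected: the BOM is not the cause")
except json.JSONDecodeError:
    print("hypothesis confirmed: decode with utf-8-sig at the source")
```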

### Phase 4: Implementation

1. **Create Failing Test Case** - Simplest reproduction, automated if possible (see the test sketch after this section)
2. **Implement Single Fix** - ONE change, no "while I'm here" improvements
3. **Verify Fix** - Test passes? No regressions?

4. **If Fix Doesn't Work:**
   - Count: How many fixes have you tried?
   - If < 3: Return to Phase 1, re-analyze
   - **If >= 3: STOP and question the architecture**

5. **If 3+ Fixes Failed: Question Architecture**

   Patterns indicating an architectural problem:
   - Each fix reveals new shared state/coupling
   - Fixes require "massive refactoring"
   - Each fix creates new symptoms elsewhere

   **STOP. Discuss with user before attempting more fixes.**
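
A sketch of step 1 assuming pytest; `myapp.config.parse_config` and the bug are hypothetical. The test must fail before the fix and pass after:

```python
# test_regression.py - simplest automated reproduction of a hypothetical bug
from myapp.config import parse_config  # hypothetical module under test

def test_empty_section_parses_to_empty_dict():
    # Red before the fix, green after - and it guards against regressions.
    result = parse_config("[section]\n")
    assert result == {"section": {}}
```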

---

## Red Flags - STOP and Follow Process

If you catch yourself thinking:
- "Quick fix for now, investigate later"
- "Just try changing X and see"
- "Add multiple changes, run tests"
- "I'm confident it's X, let me fix that"
- "One more fix attempt" (when already tried 2+)
- Proposing solutions before tracing data flow

**ALL of these mean: STOP. Return to Phase 1.**

---

## Supporting Techniques

### Defense-in-Depth

When you fix a bug, validate at EVERY layer:

| Layer | Purpose | Example |
|-------|---------|---------|
| Entry Point | Reject invalid input at API boundary | `if (!dir) throw new Error('dir required')` |
| Business Logic | Ensure data makes sense for operation | Validate before processing |
| Environment Guards | Prevent dangerous ops in specific contexts | Refuse git init outside tmpdir in tests |
| Debug Instrumentation | Capture context for forensics | Log with stack trace before dangerous ops |

A single validation layer feels sufficient, but different code paths can bypass it. Make bugs structurally impossible.
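
A sketch of those layers in Python, around a hypothetical `delete_workspace` operation; the function name and the tmpdir sandbox rule are illustrative, not prescriptive:

```python
import logging
import os
import shutil
import tempfile
import traceback

def delete_workspace(dir_path: str) -> None:
    # Layer 1 - entry point: reject invalid input at the API boundary
    if not dir_path:
        raise ValueError("dir_path required")
    # Layer 2 - business logic: the operation must make sense
    if not os.path.isdir(dir_path):
        raise FileNotFoundError(f"not a directory: {dir_path}")
    # Layer 3 - environment guard: refuse dangerous ops outside the sandbox
    sandbox = tempfile.gettempdir()
    if not os.path.realpath(dir_path).startswith(sandbox):
        raise RuntimeError(f"refusing to delete outside {sandbox}: {dir_path}")
    # Layer 4 - debug instrumentation: capture context before the dangerous op
    logging.warning("deleting %s\n%s", dir_path, "".join(traceback.format_stack()))
    shutil.rmtree(dir_path)
```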

### Condition-Based Waiting

Flaky tests guess at timing. Wait for actual conditions instead:

```python
# BAD: Guessing at timing
await asyncio.sleep(0.05)
result = get_result()

# GOOD: Wait for condition
await wait_for(lambda: get_result() is not None)
result = get_result()
```

Pattern:
```python
import asyncio
import time

async def wait_for(condition, timeout_ms=5000):
    """Poll until condition() is truthy or the timeout elapses."""
    start = time.time()
    while True:
        if condition():
            return
        if (time.time() - start) * 1000 > timeout_ms:
            raise TimeoutError("Condition not met")
        await asyncio.sleep(0.01)  # Poll every 10ms
```

---

## Common Rationalizations

| Excuse | Reality |
|--------|---------|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I see the problem, let me fix it" | Seeing symptoms != understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |

---

## Verification

Run: `python scripts/verify.py`

## References

- [references/root-cause-tracing.md](references/root-cause-tracing.md) - Trace bugs backward through call stack

Overview

This skill teaches a disciplined, four‑phase methodology for diagnosing and fixing bugs, test failures, and unexpected behavior. It enforces a strict rule: never propose or apply fixes until you have completed a root cause investigation. Use it to replace guesswork with reproducible evidence, minimal hypotheses, and verified fixes.

How this skill works

The skill guides you through Phase 1 (root cause investigation), Phase 2 (pattern analysis), Phase 3 (single‑hypothesis testing), and Phase 4 (safe implementation). It emphasizes reproducible repro steps, tracing data flow across component boundaries, comparing against working references, making the smallest possible change to test a hypothesis, and creating failing tests before fixing. It also defines hard stop conditions when repeated attempts indicate an architectural problem.

When to use it

  • Before proposing fixes for any technical issue you encounter
  • When a quick patch seems obvious or you feel time pressure
  • After multiple failed fix attempts (to avoid thrashing)
  • When failures are intermittent or unreproducible
  • When debugging multi‑component systems or CI/test failures

Best practices

  • Always reproduce the issue reliably and record exact steps and environment
  • Read full error messages and stack traces; note line numbers and error codes
  • Trace bad values back to their origin and fix at the source, not symptoms
  • Find a working reference in the codebase and list every difference
  • Form a single hypothesis and test the smallest change possible
  • Create an automated failing test first and verify no regressions after the fix

Example use cases

  • A CI job failing only on the runner: trace inputs/outputs at each pipeline boundary to find environment mismatch
  • A flaky async test: switch from fixed sleeps to condition‑based waiting and write a deterministic wait helper
  • A new dependency causing runtime errors: compare initialization code against a known working example and test one config change at a time
  • Repeated test fixes that keep breaking other tests: stop after three failed attempts and raise an architecture review
  • Investigating a production error whose logs cross service boundaries: gather evidence from each component to localize the fault

FAQ

What if I need to patch production immediately?

If an immediate mitigation is unavoidable, document it as a temporary measure and run Phase 1 concurrently. Treat any quick patch as provisional and return to the full process before merging permanent fixes.

How many changes can I try before stopping?

Use one hypothesis per test. If you have tried three or more distinct fixes without resolving the root cause, pause and reassess the architecture or escalate; further blind fixes are high risk.