
test-fixing skill


This skill runs tests, groups failures smartly, and guides targeted fixes to get the full Python test suite passing.

npx playbooks add skill jst-well-dan/skill-box --skill test-fixing

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md
---
name: test-fixing
description: Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass.
author: mhattingpete
---

# Test Fixing

Systematically identify and fix all failing tests using smart grouping strategies.

## When to Use

Use this skill when the user:

- Explicitly asks to fix tests ("fix these tests", "make tests pass")
- Reports test failures ("tests are failing", "test suite is broken")
- Completes an implementation and wants the tests passing
- Mentions CI/CD failures caused by tests

## Systematic Approach

### 1. Initial Test Run

Run `make test` to identify all failing tests.

Analyze output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
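
A minimal sketch of this step, assuming `make test` wraps pytest and using a placeholder log path:

```bash
# Run the full suite once and keep the output for later analysis
make test 2>&1 | tee /tmp/test_output.log

# Quick overview: total failure count and the short summary lines
grep -c "^FAILED" /tmp/test_output.log
grep "^FAILED" /tmp/test_output.log | head -n 20
```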

### 2. Smart Error Grouping

Group similar failures by:
- **Error type**: ImportError, AttributeError, AssertionError, etc.
- **Module/file**: Same file causing multiple test failures
- **Root cause**: Missing dependencies, API changes, refactoring impacts

Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
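
As an illustration of the grouping above, the same log can be summarized with standard shell tools (assuming pytest's short test summary appears in the output):

```bash
# Failures per error type (ImportError, AttributeError, AssertionError, ...)
grep "^FAILED" /tmp/test_output.log \
  | grep -oE "[A-Za-z]+Error" | sort | uniq -c | sort -rn

# Failures per test file, to spot modules with many broken tests
grep "^FAILED" /tmp/test_output.log \
  | awk '{print $2}' | cut -d: -f1 | sort | uniq -c | sort -rn
```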

### 3. Systematic Fixing Process

For each group (starting with highest impact):

1. **Identify root cause**
   - Read relevant code
   - Check recent changes with `git diff` (see the sketch after this list)
   - Understand the error pattern

2. **Implement fix**
   - Use Edit tool for code changes
   - Follow project conventions (see CLAUDE.md)
   - Make minimal, focused changes

3. **Verify fix**
   - Run subset of tests for this group
   - Use pytest markers or file patterns:
     ```bash
     uv run pytest tests/path/to/test_file.py -v
     uv run pytest -k "pattern" -v
     ```
   - Ensure group passes before moving on

4. **Move to next group**
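
A sketch of the root-cause step from item 1 above (file paths and the test name are placeholders):

```bash
# See what changed recently before touching any code
git diff HEAD~1 --stat      # files touched by the last commit
git diff                    # uncommitted changes in the working tree

# Reproduce a single failure from the group with a full traceback
uv run pytest tests/path/to/test_file.py::test_name -x --tb=long
```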

### 4. Fix Order Strategy

**Infrastructure first:**
- Import errors
- Missing dependencies
- Configuration issues

**Then API changes:**
- Function signature changes
- Module reorganization
- Renamed variables/functions

**Finally, logic issues:**
- Assertion failures
- Business logic bugs
- Edge case handling
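
Infrastructure problems usually surface before any test body runs, so they can be checked cheaply on their own. A minimal sketch, assuming the project runs pytest via uv:

```bash
# Collection alone exposes ImportErrors and broken conftest/configuration
# without executing any tests
uv run pytest --collect-only -q

# Once collection is clean, a full run surfaces API and logic failures
uv run pytest -q
```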

### 5. Final Verification

After all groups are fixed:
- Run complete test suite: `make test`
- Verify no regressions
- Check test coverage remains intact
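
A minimal verification pass (the coverage check assumes the pytest-cov plugin is installed; the package name is a placeholder):

```bash
# Full suite, the same command CI would run
make test

# Optional: confirm coverage has not dropped after the fixes
uv run pytest --cov=mypackage --cov-report=term-missing
```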

## Best Practices

- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to the next group until the current one passes
- Keep changes minimal and focused

## Example Workflow

User: "The tests are failing after my refactor"

1. Run `make test` → 15 failures identified
2. Group errors:
   - 8 ImportErrors (module renamed)
   - 5 AttributeErrors (function signature changed)
   - 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓
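
Condensed into commands (same assumptions as the sketches above; the `-k` pattern and file path are placeholders):

```bash
make test 2>&1 | tee /tmp/test_output.log   # initial run: 15 failures
uv run pytest --collect-only -q             # verify the ImportError fixes (renamed module)
uv run pytest -k "client" -v                # verify the AttributeError group (changed signature)
uv run pytest tests/test_logic.py -v        # verify the AssertionError group (logic bugs)
make test                                   # full suite: all pass
```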

Overview

This skill runs a project's test suite, groups failing tests by root cause, and applies targeted fixes until the suite passes. It focuses on systematic diagnosis, prioritizing high-impact error groups and making minimal, safe code changes. The goal is repeatable, verifiable repair of failing tests with clear verification steps.

How this skill works

Start by running the full test command (make test) to collect failures and their contexts. Group failures by error type, file/module, and likely root cause, then fix the groups in a prioritized order: infrastructure, then API changes, then logic. After each change, run focused tests for that group, and finally run the full suite to confirm no regressions.

When to use it

  • You explicitly ask to fix failing tests or make the test suite pass.
  • CI or local runs report test failures after a change or refactor.
  • You completed an implementation and need the project green before merging.
  • Multiple related test failures suggest a shared root cause that can be fixed systematically.
  • You want a repeatable workflow to triage and repair test regressions.

Best practices

  • Run focused tests after each targeted fix (pytest -k or by file) to limit feedback scope.
  • Group failures by error type, module, and root cause before changing code.
  • Prioritize fixes: infrastructure (imports, deps, config) → API/shape changes → logic.
  • Make minimal, well-scoped edits and commit frequently with clear messages.
  • Use git diff to inspect recent changes and avoid regressions from unrelated edits.

Example use cases

  • After a refactor, 15 tests fail: group them into 8 ImportErrors, 5 AttributeErrors, and 2 AssertionErrors, then fix the groups in that order.
  • CI pipeline shows import errors due to a renamed package — fix imports and re-run focused tests.
  • Function signature changed across modules causing AttributeErrors — update call sites and tests incrementally.
  • Logic bug surfaced by assertions — isolate failing tests, reproduce locally, implement fix, then validate full suite.

FAQ

How do you decide which group to fix first?

Prioritize by impact and dependency order: infrastructure issues (imports, deps) first because they block many tests, then API/shape changes, and finally individual logic failures.

What commands do you use for focused verification?

Run targeted pytest invocations (by file or -k pattern), for example pytest tests/path/to/test_file.py -v or pytest -k "pattern" -v, before running make test for full verification.