
This skill systematically identifies, groups, and fixes failing tests to make the test suite green and protect code quality.

npx playbooks add skill krosebrook/source-of-truth-monorepo --skill test-fixing

Review the SKILL.md file below or copy the command above to add this skill to your agents.

SKILL.md
---
name: test-fixing
description: Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass. Activates on phrases like "fix the tests", "tests are failing", or "make the test suite green".
---

# Test Fixing Workflow

Systematically identify and fix all failing tests using smart grouping strategies.

## When to Use

Automatically activate when the user:
- Explicitly asks to fix tests ("fix these tests", "make tests pass")
- Reports test failures ("tests are failing", "test suite is broken")
- Completes implementation and wants tests passing
- Mentions CI/CD failures due to tests

## Systematic Approach

### 1. Initial Test Run

Run `make test` to identify all failing tests (a capture-and-summarize sketch follows the list below).

Analyze the output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
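
As a minimal sketch (assuming pytest-style `FAILED path::test - ErrorType: message` summary lines; adjust the patterns for other runners):

```bash
# Capture the full run so it can be re-analyzed without re-running the suite
make test 2>&1 | tee /tmp/test-output.log

# Total failures, then a breakdown by error type
grep -c '^FAILED' /tmp/test-output.log
grep '^FAILED' /tmp/test-output.log \
  | sed -E 's/.* - ([A-Za-z]+(Error|Exception)).*/\1/' \
  | sort | uniq -c | sort -rn
```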

### 2. Smart Error Grouping

Group similar failures (see the sketch after these lists) by:
- **Error type**: ImportError, AttributeError, AssertionError, etc.
- **Module/file**: Same file causing multiple test failures
- **Root cause**: Missing dependencies, API changes, refactoring impacts

Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
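
To make grouping concrete, here is a minimal shell sketch that buckets pytest-style `FAILED` lines by error type and test file, so the largest groups surface first (it reuses the log captured in step 1):

```bash
# Group failures by (error type, test file) and sort by group size
grep '^FAILED' /tmp/test-output.log \
  | sed -E 's/^FAILED ([^:]+)::.* - ([A-Za-z]+).*/\2 in \1/' \
  | sort | uniq -c | sort -rn
```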

### 3. Systematic Fixing Process

For each group (starting with highest impact):

1. **Identify root cause**
   - Read relevant code
   - Check recent changes with `git diff` (see the sketch below)
   - Understand the error pattern
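
   As a quick sketch (assuming `main` is the default branch; Git's rename detection is on by default):

   ```bash
   # What changed relative to the default branch, at a glance
   git diff main...HEAD --stat

   # Recent commits that renamed files, a common source of ImportErrors
   git log -5 --diff-filter=R --summary
   ```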

2. **Implement fix**
   - Use Edit tool for code changes
   - Follow project conventions (see CLAUDE.md)
   - Make minimal, focused changes

3. **Verify fix**
   - Run subset of tests for this group
   - Use pytest markers or file patterns:
     ```bash
     # Re-run a single test file
     uv run pytest tests/path/to/test_file.py -v
     # Re-run tests matching a keyword expression
     uv run pytest -k "pattern" -v
     # Re-run tests tagged with a registered pytest marker
     uv run pytest -m "marker_name" -v
     ```
   - Ensure group passes before moving on

4. **Move to next group**

### 4. Fix Order Strategy

**Infrastructure first:**
- Import errors (see the import-hunting sketch at the end of this section)
- Missing dependencies
- Configuration issues

**Then API changes:**
- Function signature changes
- Module reorganization
- Renamed variables/functions

**Finally, logic issues:**
- Assertion failures
- Business logic bugs
- Edge case handling
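
For the infrastructure tier, import failures are often just modules that moved. A minimal sketch for hunting down a renamed module (the symbol and paths are hypothetical; plain `grep -rn` works where ripgrep is unavailable):

```bash
# Where does the symbol live now?
rg -n "def load_config" src/

# Which tests still import it from the old path?
rg -l "from app.utils import load_config" tests/
```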

### 5. Final Verification

After all groups fixed:
- Run the complete test suite: `make test` (a verification sketch follows below)
- Verify no regressions
- Check that test coverage remains intact
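
A final pass might look like this (the coverage check assumes `pytest-cov` is configured in the project; skip it otherwise):

```bash
# Everything must pass, not just the groups that were fixed
make test

# Optional: confirm coverage has not regressed (assumes pytest-cov)
uv run pytest --cov --cov-report=term-missing
```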

## Best Practices

- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to next group until current passes
- Keep changes minimal and focused

## Example Workflow

User: "The tests are failing after my refactor"

1. Run `make test` → 15 failures identified
2. Group errors:
   - 8 ImportErrors (module renamed)
   - 5 AttributeErrors (function signature changed)
   - 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓

Overview

This skill automates running the test suite and systematically fixes all failing tests using smart error grouping. It focuses on identifying root causes, prioritizing high-impact groups, and applying minimal, targeted changes to get the monorepo test suite green. The goal is reliable, repeatable test recovery with minimal disruption to the codebase.

How this skill works

It begins with a full test run (make test) to collect failure metadata: counts, error types, and affected files. Failures are grouped by error type, module, and root cause to prioritize fixes. For each group the skill identifies the root cause, implements minimal edits, and verifies the fix by running focused test subsets before progressing. After all groups pass, it runs the full suite to confirm no regressions.

When to use it

  • When a user asks to "fix the tests" or "make the test suite green".
  • When tests fail locally or in CI and the user reports failing tests.
  • After a large refactor or repository merge that introduces multiple test failures.
  • When CI is blocked by test failures and a quick, structured remediation is required.
  • When you want a repeatable, prioritized approach to repair many related failures.

Best practices

  • Run a complete test pass first to capture all failures and patterns.
  • Group similar failures by error type, file/module, or root cause before changing code.
  • Prioritize infrastructure errors (imports, deps, config) before API or logic fixes.
  • Make minimal, focused edits and verify them with targeted test runs.
  • Use git diff to understand recent changes that likely introduced failures.
  • Don’t proceed to the next group until the current group’s tests reliably pass.

Example use cases

  • A refactor renamed modules and 20 tests fail with ImportError; fix imports first and re-run targeted tests.
  • CI shows multiple unrelated failures; group errors to address common root causes and unblock the pipeline quickly.
  • After updating external dependencies, several tests fail; identify API changes and adapt calls in affected modules.
  • A developer reports failing tests after merging a feature branch; run make test, group errors, and apply focused fixes to restore build health.
  • A monorepo consolidation caused cross-package import breaks; repair dependency references and validate with subset runs.

FAQ

What commands does the skill run to validate fixes?

Primary commands are make test for full runs and focused pytest invocations (uv run pytest tests/path/to/test_file.py -v or uv run pytest -k "pattern" -v) for targeted verification.

How does it decide which failures to fix first?

It prioritizes by number of affected tests and dependency order: infrastructure (imports/deps/config) first, then API/signature changes, then logic/assertion fixes.

Will the skill make large refactors to fix tests?

No. It aims for minimal, focused changes scoped to the failure group to reduce regression risk and preserve test coverage.