---
name: test-fixing
description: Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass.
---
# Test Fixing
Systematically identify and fix all failing tests using smart grouping strategies.
## When to Use
- The user explicitly asks to fix tests ("fix these tests", "make tests pass")
- The user reports test failures ("tests are failing", "test suite is broken")
- The user has completed an implementation and wants the tests passing
- The user mentions CI/CD failures caused by tests
## Systematic Approach
### 1. Initial Test Run
Run `make test` to identify all failing tests.
Analyze output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
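A minimal triage sketch, assuming a pytest-style runner behind `make test` (the log file name is arbitrary):

```bash
# Capture the full output so failures can be analyzed after the run
make test 2>&1 | tee test-output.log

# Quick failure count (pytest prints FAILED/ERROR lines in its short summary)
grep -cE "^(FAILED|ERROR)" test-output.log
```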
### 2. Smart Error Grouping
Group similar failures by:
- **Error type**: ImportError, AttributeError, AssertionError, etc.
- **Module/file**: Same file causing multiple test failures
- **Root cause**: Missing dependencies, API changes, refactoring impacts
Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
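One way to build these groups from the captured log, assuming pytest-style `FAILED path::test - ErrorType: ...` summary lines (the error-type list is illustrative):

```bash
# Count failures per error type to find the biggest groups first
grep -oE "(ImportError|AttributeError|AssertionError|TypeError)" test-output.log \
  | sort | uniq -c | sort -rn

# Count failures per test file to spot a single broken module
grep -E "^FAILED" test-output.log | cut -d: -f1 | sort | uniq -c | sort -rn
```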
### 3. Systematic Fixing Process
For each group (starting with highest impact):
1. **Identify root cause**
- Read relevant code
- Check recent changes with `git diff` (see the sketch after this list)
- Understand the error pattern
2. **Implement fix**
- Use Edit tool for code changes
- Follow project conventions (see CLAUDE.md)
- Make minimal, focused changes
3. **Verify fix**
- Run subset of tests for this group
- Use pytest markers or file patterns:
```bash
# Run a single test file
uv run pytest tests/path/to/test_file.py -v
# Run only tests whose names match a pattern
uv run pytest -k "pattern" -v
```
- Ensure group passes before moving on
4. **Move to next group**
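For step 1, `git` can narrow the root cause quickly. A sketch, where `src/your_module/` is a hypothetical path standing in for the suspect module:

```bash
# Diff the suspect module against the last commit
git diff HEAD~1 -- src/your_module/   # hypothetical path

# List the commits that most recently touched it
git log --oneline -5 -- src/your_module/
```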
### 4. Fix Order Strategy
**Infrastructure first:**
- Import errors
- Missing dependencies
- Configuration issues
**Then API changes:**
- Function signature changes
- Module reorganization
- Renamed variables/functions
**Finally, logic issues:**
- Assertion failures
- Business logic bugs
- Edge case handling
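To apply the infrastructure-first pass, pytest's collection phase is a useful probe: it surfaces import and configuration errors before any test logic runs. A sketch:

```bash
# Collection alone fails fast on import and configuration errors
uv run pytest --collect-only -q

# Once collection is clean, move on to the API- and logic-level groups
uv run pytest -k "pattern" -v
```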
### 5. Final Verification
After all groups fixed:
- Run complete test suite: `make test`
- Verify no regressions
- Check test coverage remains intact
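A final-verification sketch; the coverage step assumes pytest-cov is installed and a `src/` layout, so adjust both to the project:

```bash
# Full suite via the project entry point
make test

# Optional: confirm coverage did not regress (assumes pytest-cov)
uv run pytest --cov=src --cov-report=term-missing
```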
## Best Practices
- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to next group until current passes
- Keep changes minimal and focused
## Example Workflow
User: "The tests are failing after my refactor"
1. Run `make test` → 15 failures identified
2. Group errors:
- 8 ImportErrors (module renamed)
- 5 AttributeErrors (function signature changed)
- 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓
## FAQ
**How do you prioritize which failures to fix first?**
Prioritize by the number of affected tests and by dependency order: infrastructure issues (imports, dependencies, configuration) come first, then API/signature changes, then logic assertions.

**How do you verify fixes without running the whole suite?**
Run focused tests for the affected files or markers, using pytest file patterns or `-k` expressions, to confirm group-level fixes before a final full run.

**What if a fix causes new unrelated failures?**
Revert or isolate the change, review recent diffs for unintended impacts, and fix the introduced regression before proceeding; keeping changes minimal reduces this risk.