
run-tests skill

/plugins/ork/skills/run-tests

This skill executes tests in parallel with failure analysis and coverage reporting, helping you identify issues and generate actionable test reports.

npx playbooks add skill yonatangross/orchestkit --skill run-tests


Files (2): `SKILL.md` (2.9 KB)
---
name: run-tests
description: Comprehensive test execution with parallel analysis and coverage reporting. Use when running test suites or troubleshooting failures with the run-tests workflow.
context: fork
version: 1.0.0
author: OrchestKit
tags: [testing, pytest, coverage, test-execution]
user-invocable: false
---

# Run Tests

Executes test suites and, when failures occur, launches parallel analysis agents.

## Quick Start

```bash
/run-tests
/run-tests backend
/run-tests frontend
/run-tests tests/unit/test_auth.py
```

## Test Scope

| Argument | Scope |
|----------|-------|
| Empty/`all` | All tests |
| `backend` | Backend only |
| `frontend` | Frontend only |
| `path/to/test.py` | Specific file |
| `test_name` | Specific test |
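
The scope argument could be resolved to a concrete runner invocation along these lines. This is a minimal sketch; the helper name and the exact command strings are illustrative, not the skill's actual internals:

```python
# Sketch: map a /run-tests argument to concrete test commands.
# Command strings are illustrative; the real skill builds these internally.

BACKEND_CMD = "poetry run pytest tests/unit/ -v --tb=short --cov=app"
FRONTEND_CMD = "npm run test -- --coverage"

def resolve_scope(arg: str = "") -> list[str]:
    """Return the runner command(s) for a /run-tests argument."""
    if arg in ("", "all"):
        return [BACKEND_CMD, FRONTEND_CMD]  # run everything
    if arg == "backend":
        return [BACKEND_CMD]
    if arg == "frontend":
        return [FRONTEND_CMD]
    if arg.endswith(".py"):  # specific test file
        return [f"poetry run pytest {arg} -v --tb=short"]
    # otherwise treat the argument as a test name and filter with -k
    return [f'poetry run pytest tests/unit/ -k "{arg}" -v']
```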

## Phase 1: Execute Tests

```bash
# Backend with coverage
cd backend
poetry run pytest tests/unit/ -v --tb=short \
  --cov=app --cov-report=term-missing

# Frontend with coverage
cd frontend
npm run test -- --coverage
```

## Phase 2: Failure Analysis

If tests fail, launch 3 parallel analyzers:
1. **Backend Failure Analysis** - Root cause, fix suggestions
2. **Frontend Failure Analysis** - Component issues, mock problems
3. **Coverage Gap Analysis** - Low coverage areas
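
The fan-out can be sketched with a thread pool; the analyzer functions below are placeholders for the real agents, and their return shape is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder analyzers -- the real skill spawns agent tasks instead.
def analyze_backend(failures):
    return {"analyzer": "backend", "findings": len(failures)}

def analyze_frontend(failures):
    return {"analyzer": "frontend", "findings": len(failures)}

def analyze_coverage(report):
    return {"analyzer": "coverage", "findings": len(report)}

def run_analyzers(backend_failures, frontend_failures, coverage_report):
    """Launch all three analyses in parallel and collect their results."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [
            pool.submit(analyze_backend, backend_failures),
            pool.submit(analyze_frontend, frontend_failures),
            pool.submit(analyze_coverage, coverage_report),
        ]
        return [f.result() for f in futures]
```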

## Phase 3: Generate Report

```markdown
# Test Results Report

## Summary
| Suite | Total | Passed | Failed | Coverage |
|-------|-------|--------|--------|----------|
| Backend | X | Y | Z | XX% |
| Frontend | X | Y | Z | XX% |

## Status: [ALL PASS | SOME FAILURES]

## Failures (if any)
| Test | Error | Fix |
|------|-------|-----|
| test_name | AssertionError | [suggestion] |
```
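
The summary table above could be rendered from per-suite counts roughly like this (a sketch; the input field names are assumptions, not the skill's actual data model):

```python
def summary_table(suites: dict[str, dict]) -> str:
    """Render per-suite results as the markdown summary table."""
    lines = [
        "| Suite | Total | Passed | Failed | Coverage |",
        "|-------|-------|--------|--------|----------|",
    ]
    for name, s in suites.items():
        lines.append(
            f"| {name} | {s['total']} | {s['passed']} "
            f"| {s['failed']} | {s['coverage']}% |"
        )
    return "\n".join(lines)
```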

## Quick Commands

```bash
# All backend tests
poetry run pytest tests/unit/ -v --tb=short

# With coverage
poetry run pytest tests/unit/ --cov=app

# Quick (no tracebacks)
poetry run pytest tests/unit/ --tb=no -q

# Specific test
poetry run pytest tests/unit/ -k "test_name" -v

# Frontend
npm run test -- --coverage

# Watch mode
npm run test -- --watch
```

## Key Options

| Option | Purpose |
|--------|---------|
| `--maxfail=3` | Stop after 3 failures |
| `-x` | Stop on first failure |
| `--lf` | Run only last failed |
| `-v` | Verbose output |
| `--tb=short` | Shorter tracebacks |

## Related Skills

- `unit-testing` - Unit test patterns and best practices
- `integration-testing` - Integration test patterns for component interactions
- `e2e-testing` - End-to-end testing with Playwright
- `test-data-management` - Test data fixtures and factories

## Key Decisions

| Decision | Choice | Rationale |
|----------|--------|-----------|
| Parallel Analyzers | 3 agents | Backend, frontend, and coverage analysis in parallel |
| Default Traceback | `--tb=short` | Balance between detail and readability |
| Stop Threshold | `--maxfail=3` | Quick feedback without overwhelming output |
| Coverage Tool | pytest-cov / jest | Native integration with test frameworks |

## References

- [Test Commands](references/test-commands.md)

## Overview

This skill runs comprehensive test suites with parallel failure analysis and coverage reporting. It orchestrates backend and frontend test execution, spawns analyzers for failures, and produces a consolidated test results report. Use it to speed up debugging and to maintain reliable coverage metrics.

## How this skill works

The skill executes tests for the selected scope (all, backend, frontend, or a specific path/test) using native test runners (pytest for backend, npm/jest for frontend) with coverage enabled. If failures occur, it launches three parallel analyzers: backend failure analysis, frontend failure analysis, and coverage gap analysis. Finally it aggregates results into a markdown report summarizing totals, failed tests with suggested fixes, and coverage percentages.

## When to use it

- Running full CI-style test passes locally or in CI
- Narrowing scope to backend or frontend during development
- Debugging a failing test or reproducing flaky tests
- Generating coverage reports to guide test improvements
- Quick triage after a pull request introduces test regressions

## Best practices

- Run backend tests with pytest and pytest-cov to collect coverage (`--cov` and `--cov-report` options)
- Use `--tb=short` and `-v` for concise but useful tracebacks; use `--maxfail` to limit noisy failures
- When reproducing a single failure, run the specific test path or `-k test_name` to iterate quickly
- Launch the parallel analyzers only when failures occur to save resources
- Include the generated markdown report in PRs or issue comments to speed up reviewer understanding

## Example use cases

- `/run-tests` to run the entire project test suite and output a combined report
- `/run-tests backend` to execute backend unit tests with coverage and surface low-coverage modules
- `/run-tests frontend` to run Jest tests with coverage and flag component or mock breakages
- `/run-tests tests/unit/test_auth.py` to reproduce a specific failing test and get targeted suggestions
- Use the generated report to populate a CI artifact or attach it to a bug ticket for triage

## FAQ

### What arguments can I pass to scope the run?

You can run with no argument (all), `backend`, `frontend`, a file path, or a test name to limit scope.

### How are failures analyzed?

Three analyzers run in parallel: backend analysis for root causes and fixes, frontend analysis for UI/component issues, and coverage gap analysis to identify untested code paths.

### Can I change traceback verbosity?

Yes. Use pytest flags such as `--tb=short` (the recommended default), `--tb=no` for compact output, or `-v` for verbose test names.