
ln-782-test-runner skill

/ln-782-test-runner

This skill detects test frameworks, runs all suites, and reports results with optional coverage to streamline test validation.

npx playbooks add skill levnikolaevich/claude-code-skills --skill ln-782-test-runner


SKILL.md
---
name: ln-782-test-runner
description: Executes all test suites and reports results with coverage
---

> **Paths:** File paths (`shared/`, `references/`, `../ln-*`) are relative to skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for repo root.

# ln-782-test-runner

**Type:** L3 Worker
**Category:** 7XX Project Bootstrap
**Parent:** ln-780-bootstrap-verifier

---

## Purpose

Detects test frameworks, executes all test suites, and reports results including pass/fail counts and optional coverage.

**Scope:**
- Auto-detect test frameworks from project configuration
- Execute test suites for all detected frameworks
- Parse test output for pass/fail counts
- Generate coverage reports when enabled

**Out of Scope:**
- Building projects (handled by ln-781)
- Container operations (handled by ln-783)
- Writing or fixing tests

---

## When to Use

| Scenario | Use This Skill |
|----------|---------------|
| Called by ln-780 orchestrator | Yes |
| Standalone test execution | Yes |
| CI/CD pipeline test step | Yes |
| Build verification needed | No, use ln-781 |

---

## Workflow

### Step 1: Detect Test Frameworks

Identify test frameworks from project configuration files.

| Marker | Test Framework | Project Type |
|--------|---------------|--------------|
| vitest.config.* | Vitest | Node.js |
| jest.config.* | Jest | Node.js |
| *.test.ts files (no explicit config) | Vitest/Jest | Node.js |
| xunit / nunit in *.csproj | xUnit/NUnit | .NET |
| pytest.ini / conftest.py | pytest | Python |
| *_test.go files | go test | Go |
| tests/ with Cargo.toml | cargo test | Rust |
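As an illustration, the marker table above can be reduced to a small detection sketch. The `MARKERS` list and `detect_frameworks` name are hypothetical, and a real implementation would check more markers (e.g. a `tests/` directory alongside `Cargo.toml` for Rust):

```python
from pathlib import Path

# Marker glob patterns mapped to (framework, project type), mirroring the table.
MARKERS = [
    ("vitest.config.*", "Vitest", "Node.js"),
    ("jest.config.*", "Jest", "Node.js"),
    ("pytest.ini", "pytest", "Python"),
    ("conftest.py", "pytest", "Python"),
    ("**/*_test.go", "go test", "Go"),
    ("Cargo.toml", "cargo test", "Rust"),
]

def detect_frameworks(root: str) -> set[str]:
    """Return the set of test frameworks whose marker files exist under root."""
    base = Path(root)
    found = set()
    for pattern, framework, _project_type in MARKERS:
        if any(base.glob(pattern)):
            found.add(framework)
    return found
```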

### Step 2: Execute Test Suites

Run tests for each detected framework.

| Framework | Execution Strategy |
|-----------|-------------------|
| Vitest | Run in single-run mode with JSON reporter |
| Jest | Run with JSON output |
| xUnit/NUnit | Run with logger for structured output |
| pytest | Run with JSON plugin or verbose output |
| go test | Run with JSON output flag |
| cargo test | Run with standard output parsing |
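A possible command set for these strategies, assuming typical CLI flags (`--reporter=json`, `--json`, `-json`, and `--logger trx` are commonly documented, but verify them against the installed framework versions — this mapping is a sketch, not a fixed interface):

```python
import shlex

# Candidate commands for machine-readable output per framework.
# Flags are typical defaults and should be confirmed per version.
COMMANDS = {
    "Vitest": "npx vitest run --reporter=json",
    "Jest": "npx jest --json",
    "xUnit/NUnit": "dotnet test --logger trx",
    "pytest": "python -m pytest -v",
    "go test": "go test -json ./...",
    "cargo test": "cargo test",
}

def command_for(framework: str) -> list[str]:
    """Return the argv list for a detected framework."""
    return shlex.split(COMMANDS[framework])
```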

### Step 3: Parse Results

Extract test results from framework output.

| Metric | Description |
|--------|-------------|
| total | Total number of tests discovered |
| passed | Tests that completed successfully |
| failed | Tests that failed assertions |
| skipped | Tests marked as skip/ignore |
| duration | Total execution time |
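For example, a pytest-style summary line can be parsed into these metrics with a small regex sketch (the function name and patterns are illustrative; a real parser should prefer each framework's structured JSON output):

```python
import re

# Matches counts in a pytest-style summary line, e.g.
# "== 3 passed, 1 failed, 2 skipped in 0.42s =="
SUMMARY = re.compile(r"(\d+)\s+(passed|failed|skipped)")
DURATION = re.compile(r"in\s+([\d.]+)s")

def parse_summary(line: str) -> dict:
    """Extract total/passed/failed/skipped counts and duration from a summary line."""
    counts = {"passed": 0, "failed": 0, "skipped": 0}
    for num, kind in SUMMARY.findall(line):
        counts[kind] = int(num)
    counts["total"] = counts["passed"] + counts["failed"] + counts["skipped"]
    match = DURATION.search(line)
    counts["duration"] = float(match.group(1)) if match else 0.0
    return counts
```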

### Step 4: Generate Coverage (Optional)

When coverage is enabled, collect coverage metrics with the framework's native tool.

| Framework | Coverage Tool |
|-----------|--------------|
| Vitest/Jest | c8 / istanbul |
| .NET | coverlet |
| pytest | pytest-cov |
| Go | go test -cover |
| Rust | cargo-tarpaulin |

**Coverage Metrics:**
| Metric | Description |
|--------|-------------|
| linesCovered | Lines executed during tests |
| linesTotal | Total executable lines in codebase |
| percentage | Coverage percentage |
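The percentage metric follows directly from the other two. A minimal sketch (the convention of reporting 100% for an empty codebase is an assumption here; some tools report 0% instead):

```python
def coverage_percentage(lines_covered: int, lines_total: int) -> float:
    """Coverage percentage, rounded to two decimals.

    Assumes 100.0 for an empty codebase to avoid division by zero;
    adjust to match the convention of the coverage tool in use.
    """
    if lines_total == 0:
        return 100.0
    return round(100.0 * lines_covered / lines_total, 2)
```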

### Step 5: Report Results

Return structured results to orchestrator.

**Result Structure:**

| Field | Description |
|-------|-------------|
| suiteName | Test suite identifier |
| framework | Detected test framework |
| status | passed / failed / error |
| total | Total test count |
| passed | Passed test count |
| failed | Failed test count |
| skipped | Skipped test count |
| duration | Execution time in seconds |
| failures | Array of failure details (test name, message) |
| coverage | Coverage metrics (if enabled) |

---

## Error Handling

| Error Type | Action |
|------------|--------|
| No tests found | Report warning, status = passed (0 tests) |
| Test timeout | Report timeout, include partial results |
| Framework error | Log error, report as error status |
| Missing dependencies | Report missing test dependencies |
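A sketch of how these error types might map onto subprocess outcomes (illustrative only; note that the exit-code check below is a placeholder — per the Critical Rules, the real status should come from parsed results, not the exit code alone):

```python
import subprocess

def run_suite(argv: list[str], timeout: int = 300) -> dict:
    """Run one suite command and classify the outcome per the error table."""
    try:
        proc = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        # Report timeout; a fuller implementation would attach partial results.
        return {"status": "error", "reason": "timeout"}
    except FileNotFoundError:
        # The runner binary itself is missing (e.g. npx, pytest not installed).
        return {"status": "error", "reason": "missing dependency"}
    # Placeholder classification: real code parses the framework's output.
    return {"status": "passed" if proc.returncode == 0 else "failed",
            "reason": None, "stdout": proc.stdout}
```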

---

## Options

| Option | Default | Description |
|--------|---------|-------------|
| skipTests | false | Skip execution if no tests found |
| allowFailures | false | Report success even if tests fail |
| coverage | false | Generate coverage report |
| timeout | 300 | Max execution time in seconds |
| parallel | true | Run test suites in parallel when possible |
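These options could be modeled as a simple defaults object, for example (field names mirror the table; the class itself is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class RunnerOptions:
    skip_tests: bool = False      # skip execution if no tests found
    allow_failures: bool = False  # report success even if tests fail
    coverage: bool = False        # generate coverage report
    timeout: int = 300            # max execution time in seconds
    parallel: bool = True         # run test suites in parallel when possible
```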

---

## Critical Rules

1. **Run all detected test suites** - do not skip suites silently
2. **Parse actual results** - do not rely only on exit code
3. **Include failure details** - provide actionable information for debugging
4. **Respect timeout** - prevent hanging on infinite loops

---

## Definition of Done

- [ ] All test frameworks detected
- [ ] All test suites executed
- [ ] Results parsed and structured
- [ ] Coverage collected (if enabled)
- [ ] Results returned to orchestrator

---

## Reference Files

- Parent: `../ln-780-bootstrap-verifier/SKILL.md`

---

**Version:** 2.0.0
**Last Updated:** 2026-01-10

## Overview

This skill detects project test frameworks, runs all test suites, parses results, and optionally generates coverage reports. It returns structured, actionable results to an orchestrator so CI/CD and automation can make decisions. The skill focuses solely on test execution and reporting, not building or container orchestration.

## How this skill works

It scans common configuration files and code markers to auto-detect frameworks (Vitest, Jest, pytest, go test, cargo test, xUnit/NUnit, etc.). For each detected framework it invokes the appropriate runner with machine-friendly output, parses test counts and failure details, and aggregates timing. When coverage is enabled it runs the native coverage tool for the framework and normalizes coverage metrics. Final results are returned as a structured payload per suite.

## When to use it

- Triggered by the ln-780 orchestrator to verify test health
- As a standalone step to run all project tests locally
- As the test stage in CI/CD pipelines
- When you need consolidated pass/fail and failure details across frameworks
- When optional coverage metrics are required for quality gates

## Best practices

- Enable coverage only when needed to reduce runtime and resource use
- Respect the configured timeout to avoid hanging test runs
- Run suites in parallel where supported to speed feedback
- Provide project dependencies and environment so framework runners succeed
- Fail fast on missing test dependencies and report clear remediation steps

## Example use cases

- Repository bootstrap step that verifies tests across Node, Python, Go, and .NET
- CI pipeline job that outputs structured test and coverage results for downstream gates
- Local developer pre-merge check to catch cross-framework regressions
- Nightly test sweep that collects coverage trends across services

## FAQ

**What happens if no tests are found?**

The skill reports a warning and returns a passed status with zero tests. The skipTests option controls whether execution is skipped entirely when no tests are detected.

**Does it rely only on exit codes to determine pass/fail?**

No. The skill parses framework output to extract counts and failure details rather than depending solely on exit codes.

**Can it run tests in parallel?**

Yes. By default, suites run in parallel when the framework supports it; this can be disabled via options.