
test-coverage-improver skill

/.codex/skills/test-coverage-improver

This skill analyzes test coverage results from pnpm test:coverage, identifies gaps, and drafts high-impact test ideas for review.

npx playbooks add skill openai/openai-agents-js --skill test-coverage-improver

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.8 KB
---
name: test-coverage-improver
description: 'Improve test coverage in the OpenAI Agents JS monorepo: run `pnpm test:coverage`, inspect coverage artifacts, identify low-coverage files and branches, propose high-impact tests, and confirm with the user before writing tests.'
---

# Test Coverage Improver

## Overview

Use this skill whenever coverage needs assessment or improvement (coverage regressions, failing thresholds, or user requests for stronger tests). It runs the coverage suite, analyzes results, highlights the biggest gaps, and prepares test additions while confirming with the user before changing code.

## Quick Start

1. From the repo root run `pnpm test:coverage` (set `CI=1` if needed) to regenerate `coverage/`.
2. Collect artifacts: `coverage/coverage-summary.json` (preferred) or `coverage/coverage-final.json`, plus `coverage/lcov.info` and `coverage/lcov-report/index.html` for drill-downs.
3. Summarize coverage: total percentages, the lowest-coverage files, branches under 80%, and uncovered lines/paths.
4. Draft test ideas per file: scenario, behavior under test, expected outcome, and likely coverage gain.
5. Ask the user for approval to implement the proposed tests; pause until they agree.
6. After approval, write the tests in the relevant package, rerun `pnpm test:coverage`, and then run `$code-change-verification` before marking work complete.
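The gap analysis in steps 2–3 can be sketched as a small script. The type shapes below follow Istanbul's standard `coverage-summary.json` format; the file paths, numbers, and the 80% threshold are illustrative assumptions, not values from this repo:

```typescript
// Shapes follow Istanbul's standard coverage-summary.json format.
type Metric = { total: number; covered: number; skipped: number; pct: number };
type FileSummary = {
  lines: Metric; statements: Metric; functions: Metric; branches: Metric;
};
type CoverageSummary = Record<string, FileSummary>; // keys: "total" plus file paths

// Return [file, branch %] pairs below the threshold, worst first.
function lowBranchCoverage(
  summary: CoverageSummary,
  threshold = 80,
): [string, number][] {
  return Object.entries(summary)
    .filter(([file]) => file !== 'total')
    .map(([file, s]): [string, number] => [file, s.branches.pct])
    .filter(([, pct]) => pct < threshold)
    .sort((a, b) => a[1] - b[1]);
}

// Helper to build sample metrics; real data would come from
// JSON.parse(fs.readFileSync('coverage/coverage-summary.json', 'utf8')).
function m(pct: number): Metric {
  return { total: 100, covered: pct, skipped: 0, pct };
}

// Invented sample data, not real repo numbers.
const sample: CoverageSummary = {
  total: { lines: m(90), statements: m(90), functions: m(90), branches: m(75) },
  'packages/agents-core/src/run.ts': {
    lines: m(95), statements: m(95), functions: m(100), branches: m(62),
  },
  'packages/agents-core/src/util.ts': {
    lines: m(88), statements: m(88), functions: m(90), branches: m(85),
  },
};

console.log(lowBranchCoverage(sample));
```

With the sample above, only `run.ts` (62% branches) falls below the 80% threshold, so it would top the proposed-test list.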

## Workflow Details

- **Run coverage**: Execute `CI=1 pnpm test:coverage` at repo root. Avoid watch flags and keep prior coverage artifacts only if comparing trends.
- **Parse summaries efficiently**:
  - Prefer `coverage/coverage-summary.json` for file-level totals; fall back to `coverage/coverage-final.json` if the summary file is absent.
  - Use `coverage/lcov.info` or `coverage/lcov-report/index.html` to spot branch- and line-level holes.
- **Prioritize targets**:
  - Public APIs or shared utilities in `packages/*/src` before examples or docs.
  - Files with statements/branches below 80% or newly added code at 0%.
  - Recent bug fixes or risky code paths (error handling, retries, timeouts, concurrency).
- **Design impactful tests**:
  - Hit uncovered branches: error cases, boundary inputs, optional flags, and cancellation/timeouts.
  - Exercise combinations of branch conditions rather than only trivial happy paths.
  - Place unit tests near the package (`packages/<pkg>/test/*.test.ts`) and avoid flaky async timing.
- **Coordinate with the user**: Present a numbered, concise list of proposed test additions and expected coverage gains. Ask explicitly before editing code or fixtures.
- **After implementation**: Rerun coverage, report the updated summary, and note any remaining low-coverage areas.
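To make the "design impactful tests" guidance concrete, here is a sketch that exercises both branches of a retry helper. The helper is invented for illustration (and kept synchronous for brevity; real retry logic is usually async), and in the repo such checks would live in a vitest file under `packages/<pkg>/test/*.test.ts`:

```typescript
// Hypothetical helper: retry an operation up to `attempts` times.
function withRetry<T>(fn: () => T, attempts: number): T {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Branch 1: success after transient failures covers the catch and loop continuation.
let calls = 0;
const result = withRetry(() => {
  calls++;
  if (calls < 3) throw new Error('transient');
  return 'ok';
}, 3);

// Branch 2: exhausting all attempts covers the final rethrow.
let exhausted = false;
try {
  withRetry(() => { throw new Error('permanent'); }, 2);
} catch {
  exhausted = true;
}

console.log(result, calls, exhausted); // 'ok' 3 true
```

A single happy-path test would leave both the catch block and the final throw uncovered; the two cases above close those branch gaps with deterministic inputs and no timing dependence.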

## Notes

- Keep any added comments or code in English.
- Do not create `scripts/`, `references/`, or `assets/` unless needed later.
- If coverage artifacts are missing or stale, rerun `pnpm test:coverage` instead of guessing.

Overview

This skill improves test coverage in the OpenAI Agents JS monorepo by running the coverage suite, inspecting artifacts, and proposing targeted tests. It identifies low-coverage files and branches, prioritizes high-impact areas, and asks for explicit user approval before writing tests or changing code.

How this skill works

Run CI=1 pnpm test:coverage to regenerate coverage artifacts, then parse coverage/coverage-summary.json (or coverage-final.json) and lcov data to spot gaps. The skill lists the lowest-coverage files, branches under threshold, and uncovered lines; drafts test scenarios that exercise the missing paths; and presents them to the user for confirmation. After approval it implements the tests in the appropriate packages, reruns coverage, and verifies the improvement.
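The branch-gap detection described above can be sketched against the standard lcov record fields (`SF:` source file, `BRF:` branches found, `BRH:` branches hit, `end_of_record`); the sample lcov text is invented for illustration:

```typescript
// Compute per-file branch coverage (%) from lcov.info text.
function branchCoverageFromLcov(lcov: string): Map<string, number> {
  const result = new Map<string, number>();
  let file = '';
  let found = 0;
  let hit = 0;
  for (const line of lcov.split('\n')) {
    if (line.startsWith('SF:')) { file = line.slice(3); found = 0; hit = 0; }
    else if (line.startsWith('BRF:')) found = Number(line.slice(4));
    else if (line.startsWith('BRH:')) hit = Number(line.slice(4));
    else if (line === 'end_of_record' && file) {
      // Files with no branches count as fully covered.
      result.set(file, found === 0 ? 100 : (hit / found) * 100);
    }
  }
  return result;
}

// Invented sample; in practice, read coverage/lcov.info from disk.
const lcovSample = [
  'SF:packages/agents-core/src/run.ts',
  'BRF:20',
  'BRH:12',
  'end_of_record',
  'SF:packages/agents-core/src/util.ts',
  'BRF:0',
  'BRH:0',
  'end_of_record',
].join('\n');

console.log(branchCoverageFromLcov(lcovSample));
```

Here `run.ts` comes out at 60% branch coverage (12 of 20 branches hit), flagging it as a candidate for the proposed-test list.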

When to use it

  • Coverage thresholds are failing or regressions are suspected
  • A release needs stronger quality gates for core packages
  • New or changed code shows 0% or low coverage
  • You want a prioritized plan for where tests will have highest impact
  • Before merging risky bug fixes involving error handling, retries, or concurrency

Best practices

  • Prefer coverage/coverage-summary.json for quick file-level totals; use lcov data only for branch- and line-level drill-downs
  • Prioritize public APIs and shared utilities in packages/*/src before examples or docs
  • Target uncovered branches: error paths, boundary values, optional flags, and cancellations
  • Write unit tests next to the package (packages/<pkg>/test/*.test.ts) and avoid flaky async timing
  • Present a numbered, concise list of proposed tests and expected coverage gains; require explicit user approval

Example use cases

  • Detecting which packages dropped below 80% branch coverage after a refactor and proposing specific tests to restore it
  • Adding tests to exercise error-handling and retry logic in a shared network utility identified as low-coverage
  • Creating boundary-case inputs to cover combinations of branch conditions in an agent orchestration module
  • Rerunning coverage after implementing tests to verify the reported improvement and remaining hotspots

FAQ

What coverage artifacts does this skill need?

Prefer coverage/coverage-summary.json; fall back to coverage/coverage-final.json. Use coverage/lcov.info or coverage/lcov-report/index.html for branch- and line-level inspection.

Will tests be added without my approval?

No. The skill drafts proposed tests and asks for explicit user confirmation before editing or adding test files.