This skill analyzes code coverage to identify untested code and generates detailed reports to improve test suites and code quality.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill test-coverage-analyzer

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: analyzing-test-coverage
description: |
  This skill analyzes code coverage metrics to identify untested code and generate comprehensive coverage reports. It is triggered when the user requests analysis of code coverage, identification of coverage gaps, or generation of coverage reports. The skill is best used to improve code quality by ensuring adequate test coverage and identifying areas for improvement. Use trigger terms like "analyze coverage", "code coverage report", "untested code", or the shortcut "cov".
---

## Overview

This skill enables Claude to analyze code coverage metrics, pinpoint areas of untested code, and generate detailed reports. It helps you identify gaps in your test suite and ensure comprehensive code coverage.

## How It Works

1. **Coverage Data Collection**: Claude executes the project's test suite with coverage tracking enabled (e.g., using `nyc`, `coverage.py`, or JaCoCo).
2. **Report Generation**: The plugin parses the coverage data and generates a detailed report, including metrics for line, branch, function, and statement coverage.
3. **Uncovered Code Identification**: Claude highlights specific lines or blocks of code that are not covered by any tests.
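
As a rough sketch of step 2, parsing coverage data might look like the following. The JSON shape is loosely modeled on coverage.py's `coverage json` output, and the inline sample data is hypothetical; real reports are read from a file such as `coverage.json`.

```python
import json

# Hypothetical excerpt of a coverage.py-style JSON report (illustrative only).
report = json.loads("""
{
  "files": {
    "src/utils.py": {
      "summary": {"covered_lines": 40, "num_statements": 50}
    }
  },
  "totals": {"covered_lines": 40, "num_statements": 50, "percent_covered": 80.0}
}
""")

# Aggregate line coverage across the whole project.
totals = report["totals"]
line_coverage = 100.0 * totals["covered_lines"] / totals["num_statements"]
print(f"Line coverage: {line_coverage:.1f}%")

# Per-file breakdown, mapping coverage back to specific files.
for path, data in report["files"].items():
    s = data["summary"]
    print(f"{path}: {s['covered_lines']}/{s['num_statements']} lines covered")
```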

## When to Use This Skill

This skill activates when you need to:
- Analyze the overall code coverage of your project.
- Identify specific areas of code that lack test coverage.
- Generate a detailed report of code coverage metrics.
- Enforce minimum code coverage thresholds.

## Examples

### Example 1: Analyzing Project Coverage

User request: "Analyze code coverage for the entire project"

The skill will:
1. Execute the project's test suite with coverage tracking.
2. Generate a comprehensive coverage report, showing line, branch, and function coverage.

### Example 2: Identifying Untested Code

User request: "Show me the untested code in the `src/utils.js` file"

The skill will:
1. Analyze the coverage data for `src/utils.js`.
2. Highlight the lines of code in `src/utils.js` that are not covered by any tests.
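
A minimal sketch of that lookup, assuming a coverage.py-style report where each file entry carries a `missing_lines` array (the file path and line numbers here are hypothetical):

```python
import json

# Hypothetical per-file entry from a coverage.py-style JSON report.
report = json.loads("""
{
  "files": {
    "src/utils.py": {
      "executed_lines": [1, 2, 3, 5, 8],
      "missing_lines": [12, 13, 27]
    }
  }
}
""")

def uncovered_lines(report, path):
    """Return the line numbers in `path` that no test executed."""
    return report["files"][path]["missing_lines"]

print(uncovered_lines(report, "src/utils.py"))
```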

## Best Practices

- **Configuration**: Ensure your project has a properly configured coverage tool (e.g., `nyc` in package.json).
- **Thresholds**: Define minimum coverage thresholds to enforce code quality standards.
- **Report Review**: Regularly review coverage reports to identify and address coverage gaps.
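
For instance, a threshold check might be sketched like this; the 80% figure and the summary dict are illustrative, and in practice the percentage would be parsed from the coverage tool's report:

```python
# Hypothetical coverage summary; in practice this comes from a parsed
# report produced by coverage.py, nyc, JaCoCo, or a similar tool.
summary = {"percent_covered": 76.4}

THRESHOLD = 80.0  # minimum acceptable coverage, in percent

def check_threshold(summary, threshold):
    """Return True when coverage meets or exceeds the threshold."""
    return summary["percent_covered"] >= threshold

if not check_threshold(summary, THRESHOLD):
    print(f"Coverage {summary['percent_covered']:.1f}% is below {THRESHOLD:.1f}%")
    # In CI, this is where the build would be failed, e.g. sys.exit(1).
```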

## Integration

This skill can be integrated with other testing and CI/CD tools to automate coverage analysis and reporting. For example, it can be used in conjunction with a linting plugin to identify both code style issues and coverage gaps.

Overview

This skill analyzes code coverage metrics to identify untested code and generate comprehensive coverage reports. It helps teams discover coverage gaps, prioritize tests, and track coverage trends over time. Use trigger phrases like "analyze coverage", "code coverage report", "untested code", or the shortcut "cov" to run the analysis.

How this skill works

The skill runs the project test suite with coverage instrumentation (for example, coverage.py, nyc, or JaCoCo), then parses the resulting coverage data. It produces aggregated metrics (line, branch, function, statement), maps coverage to specific files and line ranges, and highlights uncovered blocks. The output includes a human-readable report plus actionable suggestions for tests to add or thresholds to enforce.

When to use it

  • When you need a project-wide summary of test coverage before a release or merge.
  • When you want to find specific files, functions, or lines that lack tests.
  • When enforcing or verifying minimum coverage thresholds in CI pipelines.
  • When preparing refactors and wanting to ensure behavior remains tested.
  • When auditing tests after adding new features to avoid regressions.

Best practices

  • Ensure your project has a configured coverage tool (e.g., coverage.py, nyc, JaCoCo) and the test command works locally.
  • Run coverage analysis as part of CI to detect regressions early and fail builds on threshold violations.
  • Set realistic minimum coverage thresholds and incrementally raise them as quality improves.
  • Focus on critical logic paths and edge cases first when addressing uncovered code.
  • Include the coverage report artifacts in build logs or dashboards for visibility.

Example use cases

  • Analyze code coverage for the entire project to create a release readiness report.
  • Show untested code in src/utils.py (or src/utils.js) so engineers can write targeted unit tests.
  • Generate a branch and line coverage report to decide which areas need integration tests.
  • Enforce an 80% coverage gate in CI and fail the pipeline when coverage drops.
  • Compare coverage before and after a refactor to validate that tests still exercise behavior.
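
The before/after comparison in the last use case could be sketched as follows; both summaries are hypothetical stand-ins for totals parsed from two coverage runs:

```python
# Hypothetical totals from coverage runs before and after a refactor.
before = {"percent_covered": 82.5, "num_statements": 400}
after = {"percent_covered": 79.1, "num_statements": 430}

# A negative delta signals a coverage regression worth investigating.
delta = after["percent_covered"] - before["percent_covered"]
if delta < 0:
    print(f"Coverage dropped by {-delta:.1f} points after the refactor")
else:
    print(f"Coverage held steady or improved ({delta:+.1f} points)")
```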

FAQ

Which coverage tools are supported?

It supports common coverage tools that output standard reports, such as coverage.py, nyc, and JaCoCo, by parsing their coverage artifacts.

Can this run inside CI and fail a build on low coverage?

Yes. Integrate the skill in your CI pipeline and configure minimum coverage thresholds so the pipeline can fail when metrics fall below the target.