
This skill analyzes test coverage, identifies gaps, and suggests targeted tests to improve reliability across Python and JavaScript projects.

```bash
npx playbooks add skill eyadsibai/ltk --skill test-coverage
```

---
name: Test Coverage
description: This skill should be used when the user asks to "analyze test coverage", "find untested code", "check coverage gaps", "improve test coverage", "identify missing tests", "measure code coverage", "test quality analysis", or mentions testing strategies and coverage metrics.
version: 1.0.0
---

# Test Coverage Analysis

Comprehensive test coverage analysis skill for measuring coverage, identifying gaps, and improving test quality.

## Core Capabilities

### Coverage Measurement

Calculate and report code coverage metrics:

**Line Coverage:**

- Percentage of lines executed by tests
- Target: > 80%
- Critical code paths: > 95%

**Branch Coverage:**

- Percentage of branches (if/else) tested
- Target: > 70%
- Catches edge cases line coverage misses

**Function Coverage:**

- Percentage of functions called by tests
- Target: > 90%
- Quick indicator of test breadth

**Running coverage:**

```bash
# Python with pytest-cov
pytest --cov=src --cov-report=html --cov-report=term-missing

# Python with coverage.py
coverage run -m pytest
coverage report -m
coverage html

# JavaScript with Jest
jest --coverage
```

### Gap Identification

Find untested code areas:

**Completely Untested Files:**

- Files with 0% coverage
- Often forgotten modules
- Priority: New features, critical paths

**Partially Tested Functions:**

- Functions with some but not all branches tested
- Missing edge cases
- Error handling paths

**Untested Code Patterns:**

```bash
# Show lines not covered
coverage report --show-missing

# Fail if total coverage is below the threshold
coverage report --fail-under=80

# JSON report for parsing
coverage json -o coverage.json
```
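
The JSON report can be parsed to produce a ranked gap list automatically. Below is a minimal sketch, assuming a `coverage.json` produced by `coverage json`; field names may vary slightly between coverage.py versions:

```python
"""Sketch: list files below a coverage threshold from coverage.json."""
import json

THRESHOLD = 80.0  # assumed project-wide line-coverage target

with open("coverage.json") as fh:
    report = json.load(fh)

gaps = []
for path, data in report["files"].items():
    percent = data["summary"]["percent_covered"]
    if percent < THRESHOLD:
        # "missing_lines" lists the uncovered line numbers for the file
        gaps.append((percent, path, data.get("missing_lines", [])))

# Worst-covered files first, ready for prioritization
for percent, path, missing in sorted(gaps):
    print(f"{percent:5.1f}%  {path}  missing: {missing[:10]}")
```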

### Test Quality Assessment

Evaluate test effectiveness beyond coverage:

**Test-to-Code Ratio:**

- Lines of test / Lines of source
- Target: 1:1 to 2:1
- Low ratio may indicate insufficient testing

**Assertion Density:**

- Assertions per test function
- Target: > 1 per test
- Single assertion per concept (ideally)

**Test Independence:**

- Tests should not depend on each other
- No shared mutable state
- Proper setup/teardown

**Test Clarity:**

- Descriptive test names
- Clear arrange/act/assert structure
- Documented test purpose
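
A minimal pytest sketch of these conventions, with a toy `apply_discount` function defined inline so the example is self-contained (not taken from any real project):

```python
from decimal import Decimal

import pytest


def apply_discount(price: Decimal, pct: int) -> Decimal:
    """Toy function under test, defined inline for illustration."""
    if pct < 0:
        raise ValueError("discount percentage must be non-negative")
    return price * (Decimal(100 - pct) / Decimal(100))


def test_apply_discount_reduces_price_by_percentage():
    # Arrange
    price = Decimal("100.00")

    # Act
    discounted = apply_discount(price, pct=10)

    # Assert - one concept per test: the discounted amount
    assert discounted == Decimal("90.00")


def test_apply_discount_rejects_negative_percentage():
    # Error paths deserve their own, clearly named test
    with pytest.raises(ValueError):
        apply_discount(Decimal("100.00"), pct=-5)
```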

## Coverage Analysis Workflow

### Full Coverage Analysis

1. **Run test suite**: Execute all tests with coverage
2. **Generate reports**: Create HTML and terminal reports
3. **Identify gaps**: Find untested files and lines
4. **Prioritize**: Rank gaps by risk and importance
5. **Recommend tests**: Suggest specific tests to add
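
A minimal command-line sketch of steps 1-3, assuming a pytest project with pytest-cov installed; steps 4-5 follow from inspecting the generated reports:

```bash
# Steps 1-2: run the suite with coverage and generate reports
pytest --cov=src --cov-branch \
  --cov-report=term-missing \
  --cov-report=html

# Step 3: emit a machine-readable report for gap analysis and prioritization
coverage json -o coverage.json
```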

### Quick Coverage Check

For rapid assessment:

1. Run coverage on changed files only
2. Compare to baseline coverage
3. Flag regressions
4. Report delta coverage
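
One way to implement the delta check, assuming the diff-cover package is installed and the baseline branch is origin/main:

```bash
# Produce an XML report, then measure coverage only on lines changed vs. origin/main
pytest --cov=src --cov-report=xml
diff-cover coverage.xml --compare-branch=origin/main --fail-under=80
```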

## Coverage Report Format

### Summary Report

```
Coverage Summary
================
Total Coverage: 78.5%
Target: 80.0%
Status: BELOW TARGET

By Component:
┌────────────────┬──────────┬────────┬─────────┐
│ Component      │ Lines    │ Missed │ Coverage│
├────────────────┼──────────┼────────┼─────────┤
│ api/           │ 450      │ 45     │ 90%     │
│ services/      │ 820      │ 180    │ 78%     │
│ repositories/  │ 320      │ 96     │ 70%     │
│ utils/         │ 150      │ 60     │ 60%     │
└────────────────┴──────────┴────────┴─────────┘
```

### Gap Analysis

```
Critical Gaps (Priority: High)
==============================
1. services/payment.py (45% coverage)
   - process_payment(): Lines 45-78 untested
   - refund_transaction(): Completely untested
   - Error handling: 0% coverage

2. repositories/user_repo.py (62% coverage)
   - delete_user(): Untested
   - bulk_update(): Partial coverage
```

### Test Recommendations

```
Recommended Tests
=================

1. test_payment_processing.py
   - test_successful_payment()
   - test_payment_insufficient_funds()
   - test_payment_network_error()
   - test_refund_full_amount()
   - test_refund_partial_amount()

2. test_user_repository.py
   - test_delete_user_success()
   - test_delete_user_not_found()
   - test_bulk_update_all_fields()
   - test_bulk_update_partial()
```

## Test Types and Strategies

### Unit Tests

**Coverage focus:**

- Individual functions/methods
- Edge cases and boundaries
- Error conditions

**Best practices:**

- Fast execution (< 100ms each)
- No external dependencies
- Use mocks/stubs for isolation
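
A minimal sketch of an isolated unit test using unittest.mock; `PaymentService` and `PaymentDeclined` are toy classes defined inline for illustration, not part of any real codebase:

```python
from unittest.mock import Mock

import pytest


class PaymentDeclined(Exception):
    """Toy domain error for the example."""


class PaymentService:
    """Toy service; the gateway dependency is injected so tests can replace it."""

    def __init__(self, gateway) -> None:
        self.gateway = gateway

    def process_payment(self, amount: int, currency: str) -> dict:
        result = self.gateway.charge(amount=amount, currency=currency)
        if result["status"] != "ok":
            raise PaymentDeclined(result["status"])
        return result


def test_process_payment_declined_raises_domain_error():
    # The external gateway is replaced with a mock, keeping the test fast and isolated
    gateway = Mock()
    gateway.charge.return_value = {"status": "declined"}
    service = PaymentService(gateway=gateway)

    with pytest.raises(PaymentDeclined):
        service.process_payment(amount=100, currency="USD")

    gateway.charge.assert_called_once_with(amount=100, currency="USD")
```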

### Integration Tests

**Coverage focus:**

- Component interactions
- Database operations
- API endpoints

**Best practices:**

- Test real integrations
- Use test databases/containers
- Clean up after tests
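
A minimal pytest sketch of an isolated, self-cleaning database fixture, using an in-memory SQLite database as a stand-in for a real test database or container:

```python
import sqlite3

import pytest


@pytest.fixture
def db():
    # Fresh database per test: real SQL, no shared state, closed on teardown
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
    yield conn
    conn.close()


def test_insert_and_fetch_user(db):
    db.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
    row = db.execute("SELECT email FROM users").fetchone()
    assert row == ("a@example.com",)
```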

### End-to-End Tests

**Coverage focus:**

- User workflows
- Critical paths
- System behavior

**Best practices:**

- Selective coverage (key flows)
- Realistic test data
- Stable test environment
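
A minimal sketch of a smoke test for one critical flow, issuing HTTP requests against a running environment; the base URL, endpoints, and credentials here are hypothetical:

```python
import os

import requests

# Hypothetical environment variable pointing at a deployed test environment
BASE_URL = os.environ.get("E2E_BASE_URL", "http://localhost:8000")


def test_login_critical_flow():
    # Key flow only: the service is up and a known test user can authenticate
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

    response = requests.post(
        f"{BASE_URL}/login",
        json={"email": "qa@example.com", "password": "not-a-real-secret"},
        timeout=5,
    )
    assert response.status_code == 200
```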

## Coverage Configuration

### Python (pytest-cov)

```toml
# pyproject.toml
[tool.pytest.ini_options]
addopts = "--cov=src --cov-report=term-missing --cov-fail-under=80"

[tool.coverage.run]
branch = true
source = ["src"]
omit = ["tests/*", "*/__pycache__/*"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
```

### JavaScript (Jest)

```json
{
  "jest": {
    "collectCoverage": true,
    "coverageThreshold": {
      "global": {
        "branches": 70,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    },
    "coveragePathIgnorePatterns": [
      "/node_modules/",
      "/tests/"
    ]
  }
}
```

## Prioritization Framework

### Critical (Must Test)

- Payment/financial operations
- Authentication/authorization
- Data validation
- Security-sensitive code
- Core business logic

### Important (Should Test)

- User-facing features
- Data transformations
- External integrations
- Error handling paths

### Lower Priority

- Utility functions
- Configuration loading
- Logging code
- Debug/development code

## Common Coverage Issues

### False Sense of Security

**Problem:** High coverage but weak tests
**Solution:** Review assertion quality; use mutation testing to confirm tests fail when behavior changes (see the sketch below)
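
A minimal sketch of mutation testing with mutmut (one of several mutation-testing tools for Python), assuming it is installed in the project environment:

```bash
# Mutate the source and re-run the test suite against each mutant
mutmut run

# Surviving mutants are code changes no test detected - weak-assertion hotspots
mutmut results
```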

### Coverage Gaming

**Problem:** Tests that touch code without verifying behavior
**Solution:** Require meaningful assertions, code review

### Untestable Code

**Problem:** Code that's difficult to test
**Solution:** Refactor for testability, dependency injection
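
A before/after sketch of the dependency-injection refactor; `SmtpClient` and the mail-sending functions are toy examples, not from any real codebase:

```python
class SmtpClient:
    """Toy mail client used only to illustrate the pattern."""

    def __init__(self, host: str) -> None:
        self.host = host

    def send(self, user_id: int, template: str) -> None:
        ...


# Before: hard to unit test - the concrete client is created inside the function
def send_welcome_email_coupled(user_id: int) -> None:
    client = SmtpClient("smtp.internal:25")
    client.send(user_id, template="welcome")


# After: the dependency is injected, so tests can pass a mock or fake client
def send_welcome_email(user_id: int, client: SmtpClient) -> None:
    client.send(user_id, template="welcome")
```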

## Integration

Coordinate with other skills:

- **code-quality skill**: For test code quality
- **refactoring skill**: For improving testability
- **security-scanning skill**: For security test coverage

## Overview

This skill analyzes test coverage, finds untested code, and recommends concrete tests to improve quality. It targets line, branch, and function coverage and combines metrics with test-quality signals to prioritize gaps. The output is actionable: reports, prioritized gap lists, and specific test suggestions.

## How this skill works

It runs coverage tools (pytest-cov/coverage.py or jest --coverage), aggregates line/branch/function coverage, and produces HTML and machine-readable reports. It inspects reports to locate completely untested files, partially tested functions, and missing branches, then ranks findings by risk and impact. It also evaluates test quality metrics—test-to-code ratio, assertion density, and test independence—to surface weak tests.

## When to use it

- When asked to analyze test coverage or find untested code
- Before releases to detect coverage regressions
- When adding critical features (payments, auth, data validation)
- During code review to check test completeness
- To prioritize testing work after a spike in bugs

## Best practices

- Measure line, branch, and function coverage together; targets: lines >80%, branches >70%, functions >90%
- Generate human-friendly (HTML) and machine-readable (JSON) reports for automation
- Prioritize tests for critical business logic, security-sensitive code, and new features
- Prefer meaningful assertions and independent tests; aim for >1 assertion per test and a clear arrange/act/assert structure
- Use test doubles for unit tests and real integrations for integration tests; isolate and clean up test state

## Example use cases

- Full coverage audit that runs the entire test suite and outputs a prioritized gap analysis with recommended tests
- Quick delta check on changed files to flag coverage regressions before merge
- Targeted recommendations for a low-coverage service (e.g., payment processing) listing missing test cases
- Configuring CI to fail builds below coverage thresholds and emit JSON reports for dashboards
- Guidance on refactoring untestable code and improving testability through dependency injection

## FAQ

**What coverage thresholds should I enforce?**

Use pragmatic targets: total lines >= 80%, branches >= 70%, functions >= 90%; raise thresholds for critical components (aim for > 95%).

**Why do tests still miss bugs despite high coverage?**

High coverage can be misleading when tests lack meaningful assertions or merely execute code without verifying behavior; review assertion density, test independence, and the strength of behavior checks.