
test-generation skill


This skill generates targeted tests for Python code by analyzing code paths, surfacing edge cases, and validating coverage goals to improve reliability.

npx playbooks add skill nickcrew/claude-cortex --skill test-generation


SKILL.md
---
name: test-generation
description: Use when generating tests for new or existing code to improve coverage - provides a structured workflow for analyzing code, creating tests, and validating coverage goals.
---

# Test Generation

## Overview
Generate tests systematically by analyzing code paths, covering edge cases, and validating coverage targets.

## When to Use
- Creating tests for new features
- Improving coverage in weak areas
- Building regression or integration test suites

Avoid when:
- The task is only running existing tests (use dev-workflows)

## Quick Reference

| Task | Load reference |
| --- | --- |
| Test generation workflow | `skills/test-generation/references/generate-tests.md` |

## Workflow
1. Identify target scope and test type.
2. Load the test generation reference.
3. Analyze code paths and edge cases.
4. Generate tests and validate coverage.
5. Summarize results and gaps.

## Output
- Generated tests
- Coverage report and follow-ups

## Common Mistakes
- Writing tests without understanding code paths
- Ignoring edge cases or failure modes

Overview

This skill helps generate tests for new or existing Python code to boost coverage and reduce regressions. It provides a structured workflow to analyze code paths, surface edge cases, and produce runnable test files plus coverage reports. Use it to create focused unit, integration, or regression tests with measurable coverage goals.

How this skill works

You define a target scope and test type, then load the test-generation reference to guide analysis. The skill inspects code paths, identifies edge cases and failure modes, generates test code, runs tests, and validates coverage against your targets. It produces test files, a coverage report, and a short summary of gaps and follow-ups.
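As an illustration of the kind of output step 4 produces, here is a minimal sketch of a generated pytest-style test file. The module under test (`parse_price`) is a hypothetical stand-in, not part of this skill; the exception check uses try/except so the example runs with the standard library alone, though `pytest.raises` would be the idiomatic form.

```python
# Hypothetical generated test file. `parse_price` is an illustrative
# function under test, defined inline so the example is self-contained.

def parse_price(raw: str) -> float:
    """Stand-in for the code under test: parse '1,234.50' into a float."""
    cleaned = raw.replace(",", "").strip()
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)

# pytest discovers bare test_* functions like these.
def test_parses_plain_number():
    assert parse_price("19.99") == 19.99

def test_strips_thousands_separator():
    assert parse_price("1,234.50") == 1234.5

def test_empty_string_raises():
    # Failure-mode coverage: empty input must raise, not return 0.0.
    try:
        parse_price("")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")
```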

When to use it

  • Creating tests for a new feature or module
  • Improving coverage in areas with low test density
  • Building regression or integration test suites after refactors
  • Automating creation of edge-case and failure-mode tests
  • Preparing a test plan before a release or handoff

Best practices

  • Start by scoping the smallest meaningful unit to test and expand iteratively
  • Prioritize edge cases, error handling, and boundary conditions during analysis
  • Run generated tests locally and review assertions for clarity and intent
  • Set realistic coverage targets per module rather than a single global threshold
  • Include clear follow-up tasks for uncovered complex code paths
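The boundary-condition advice above often translates into table-driven tests. Here is a sketch for a hypothetical `clamp` helper; with pytest installed, the cases table maps directly onto `@pytest.mark.parametrize`, but plain asserts are used to keep the example stdlib-only.

```python
# Table-driven boundary tests for a hypothetical clamp(value, lo, hi) helper.

def clamp(value: float, lo: float, hi: float) -> float:
    """Stand-in for code under test: constrain value to [lo, hi]."""
    return max(lo, min(hi, value))

# (value, lo, hi, expected) -- each row targets one boundary or interior case.
BOUNDARY_CASES = [
    (-1.0, 0.0, 10.0, 0.0),   # below lower bound
    (0.0, 0.0, 10.0, 0.0),    # exactly at lower bound
    (10.0, 0.0, 10.0, 10.0),  # exactly at upper bound
    (11.0, 0.0, 10.0, 10.0),  # above upper bound
    (5.0, 0.0, 10.0, 5.0),    # interior point
]

def test_clamp_boundaries():
    for value, lo, hi, expected in BOUNDARY_CASES:
        assert clamp(value, lo, hi) == expected
```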

Example use cases

  • Generate unit tests for a newly added Python module to reach a 70% coverage goal
  • Create integration tests that exercise API error handling and retry logic
  • Produce regression tests after a refactor to lock in expected behavior
  • Automatically surface and test boundary conditions in numeric or date handling code
  • Build a baseline test suite for legacy code to guide incremental hardening
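For the retry-logic use case above, a generated test typically pairs the retry helper with a fake dependency that fails a fixed number of times. Both `call_with_retries` and `FlakyService` below are hypothetical, defined inline so the sketch is self-contained.

```python
# Sketch of a test exercising retry logic against a flaky fake dependency.

def call_with_retries(fn, attempts=3):
    """Hypothetical helper: call fn(), retrying on ConnectionError."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError as exc:
            last_error = exc
    raise last_error

class FlakyService:
    """Fake dependency that fails twice, then succeeds."""
    def __init__(self):
        self.calls = 0

    def fetch(self):
        self.calls += 1
        if self.calls < 3:
            raise ConnectionError("transient failure")
        return "ok"

def test_retries_until_success():
    service = FlakyService()
    # Two transient failures should be absorbed; the third call succeeds.
    assert call_with_retries(service.fetch) == "ok"
    assert service.calls == 3
```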

FAQ

Can this skill run tests or only generate them?

It generates tests and can run them to produce a coverage report so you can validate targets and iterate.

What test frameworks are supported?

The workflow targets common Python test frameworks; adjust generated boilerplate to match pytest, unittest, or your in-house conventions.
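To make "adjust generated boilerplate" concrete, here is the same assertion written in pytest style and in unittest style; the `add` function under test is illustrative.

```python
import unittest

def add(a, b):
    """Illustrative function under test."""
    return a + b

# pytest style: a bare test function discovered by its test_ prefix.
def test_add_pytest_style():
    assert add(2, 3) == 5

# unittest style: a TestCase subclass using assert* methods.
class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)
```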