
comprehensive-unit-testing-with-pytest skill

/.claude/skills/comprehensive-unit-testing-with-pytest

This skill helps you achieve high pytest coverage by reviewing, refactoring, and improving unit tests for common and edge cases.

npx playbooks add skill oimiragieo/agent-studio --skill comprehensive-unit-testing-with-pytest

Review the files below or copy the command above to add this skill to your agents.

Files (12)
SKILL.md
1.6 KB
---
name: comprehensive-unit-testing-with-pytest
description: Aims for high test coverage using pytest, testing both common and edge cases.
version: 1.0.0
model: sonnet
invoked_by: both
user_invocable: true
tools: [Read, Write, Edit, Bash]
globs: '**/tests/*.py'
best_practices:
  - Follow the guidelines consistently
  - Apply rules during code review
  - Use as reference when writing new code
error_handling: graceful
streaming: supported
---

# Comprehensive Unit Testing With Pytest Skill

<identity>
You are a coding standards expert specializing in comprehensive unit testing with pytest.
You help developers write better code by applying established guidelines and best practices.
</identity>

<capabilities>
- Review code for guideline compliance
- Suggest improvements based on best practices
- Explain why certain patterns are preferred
- Help refactor code to meet standards
</capabilities>

<instructions>
When reviewing or writing code, apply these guidelines:

- **Thorough Unit Testing:** Aim for high test coverage (90% or higher) using `pytest`. Test both common cases and edge cases.

</instructions>
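The guideline above can be sketched as a minimal test module. The `divide` function here is hypothetical, defined only for illustration; the point is pairing the common case with an explicit error-path test:

```python
import pytest

def divide(a, b):
    """Hypothetical function under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def test_divide_common_case():
    # Typical input: exercises the normal return path
    assert divide(10, 2) == 5

def test_divide_zero_divisor():
    # Edge case: the error path should raise, not return a sentinel
    with pytest.raises(ValueError):
        divide(1, 0)
```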

<examples>
Example usage:
```
User: "Review this code for comprehensive unit testing with pytest compliance"
Agent: [Analyzes code against guidelines and provides specific feedback]
```
</examples>

## Memory Protocol (MANDATORY)

**Before starting:**

```bash
cat .claude/context/memory/learnings.md
```

**After completing:** Record any new patterns or exceptions discovered.

> ASSUME INTERRUPTION: Your context may reset. If it's not in memory, it didn't happen.

Overview

This skill helps teams achieve high pytest-based unit test coverage and robust test suites. It focuses on designing tests that cover common flows and edge cases, pushing toward a target of 90%+ coverage. It guides reviews, suggests refactors, and explains why the chosen patterns improve reliability and maintainability.

How this skill works

I inspect code and test artifacts to identify gaps in coverage, missing edge-case tests, brittle fixtures, and anti-patterns. I recommend concrete pytest patterns (fixtures, parametrization, monkeypatching, and markers) and provide targeted test examples and refactor suggestions. I also enforce a short memory protocol: check current learning notes before starting and record newly discovered patterns after finishing.
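The four pytest patterns named above can be combined in one short test module. This is a sketch with a hypothetical `greeting` function; the fixture and `monkeypatch` isolate the environment variable, `parametrize` covers input combinations, and the `slow` marker (an assumed, project-defined marker) allows deselection with `-m "not slow"`:

```python
import os
import pytest

def greeting(name):
    # Hypothetical function under test: behavior depends on an env var
    lang = os.environ.get("APP_LANG", "en")
    return ("hola " if lang == "es" else "hello ") + name

@pytest.fixture
def spanish_env(monkeypatch):
    # Fixture + monkeypatch: set the env var for one test, auto-undone after
    monkeypatch.setenv("APP_LANG", "es")

@pytest.mark.parametrize("name,expected", [
    ("ana", "hola ana"),
    ("bob", "hola bob"),
])
def test_greeting_spanish(spanish_env, name, expected):
    assert greeting(name) == expected

@pytest.mark.slow  # marker: deselect with `pytest -m "not slow"`
def test_greeting_default():
    assert greeting("bob") == "hello bob"
```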

When to use it

  • When preparing a library or service for release and needing strong regression safety
  • When adding features that require confidence across edge cases and inputs
  • When a codebase lacks tests or has brittle or flaky tests
  • When aiming to meet or demonstrate 90%+ test coverage
  • When onboarding new contributors and setting testing standards

Best practices

  • Aim for 90%+ coverage but prioritize meaningful assertions over raw lines covered
  • Test both typical behavior and edge cases, including error paths and input validation
  • Use pytest fixtures for reusable setup and teardown to reduce duplication
  • Parametrize tests to cover combinations of inputs without duplicating logic
  • Use monkeypatch or dependency injection to isolate external calls and control side effects
  • Run tests under CI with coverage reporting and fail builds on unacceptable regressions
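The CI bullet above can be wired up with the third-party pytest-cov plugin (an assumption here, not something this skill installs); `mypkg` is a placeholder package name. `--cov-fail-under` makes the run exit non-zero when coverage drops below the threshold, which fails the build:

```shell
# Branch coverage, a report of uncovered lines, and a hard 90% gate
pytest --cov=mypkg --cov-branch --cov-report=term-missing --cov-fail-under=90
```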

Example use cases

  • Review an existing repo to identify missing edge-case tests and provide concrete test snippets
  • Refactor test suites to replace fragile setup with fixtures and parametrized cases
  • Design tests for input validation, error handling, and boundary values in numeric and date calculations
  • Create fixtures and mocks to isolate network and filesystem interactions for deterministic tests
  • Propose CI integration and coverage gating to prevent coverage regressions
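As a sketch of the filesystem-isolation use case, pytest's built-in `tmp_path` fixture gives each test its own temporary directory, so no mocking or manual cleanup is needed (`save_report` is a hypothetical function defined here for illustration):

```python
def save_report(path, lines):
    # Hypothetical function under test: writes one line per entry
    path.write_text("\n".join(lines))
    return path.stat().st_size

def test_save_report_writes_lines(tmp_path):
    # tmp_path is a per-test pathlib.Path; the directory is cleaned up by pytest
    out = tmp_path / "report.txt"
    size = save_report(out, ["a", "b"])
    assert out.read_text() == "a\nb"
    assert size == 3  # "a", newline, "b" = 3 bytes
```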

FAQ

What coverage target should I set?

Aim for 90%+ as a practical goal, but focus on covering critical logic and edge cases; some lines (e.g., simple getters) may be lower priority.
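One way to encode "lower-priority lines don't count against the target" is coverage.py's exclusion config. This is a sketch of a `.coveragerc` (filename and excluded patterns are illustrative choices, not requirements):

```ini
# .coveragerc: enforce the 90% goal while excluding low-value lines
[report]
fail_under = 90
exclude_lines =
    pragma: no cover
    def __repr__
```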

This project is primarily JavaScript—can this pytest-focused skill still help?

Yes. Use the same testing principles (edge-case coverage, fixtures, parametrization) as guidance. For JS-specific tooling, translate patterns to your framework (Jest/Mocha) while keeping the same coverage goals.

What is the memory protocol I must follow?

Before starting, review the current learning notes. After finishing, record any new test patterns, recurring edge cases, or exceptions so future reviews benefit from the discoveries.