
tdd-pytest skill


This skill guides you through test-driven development with pytest, auditing tests, running coverage, and generating reports for robust Python projects.

npx playbooks add skill 89jobrien/steve --skill tdd-pytest

---
name: tdd-pytest
description: Python/pytest TDD specialist for test-driven development workflows. Use
  when writing tests, auditing test quality, running pytest, or generating test reports.
  Integrates with uv and pyproject.toml configuration.
author: Joseph OBrien
status: unpublished
updated: '2025-12-23'
version: 1.0.1
tag: skill
type: skill
---

# TDD-Pytest Skill

Activate this skill when the user needs help with:

- Writing tests using TDD methodology (Red-Green-Refactor)
- Auditing existing pytest test files for quality
- Running tests with coverage
- Generating test reports to `TESTING_REPORT.local.md`
- Setting up pytest configuration in `pyproject.toml`

## TDD Workflow

### Red-Green-Refactor Cycle

1. **RED** - Write a failing test first
   - Test should fail for the right reason (not import errors)
   - Test should be minimal and focused
   - Show the failing test output

2. **GREEN** - Write minimal code to pass
   - Only implement what's needed to pass the test
   - No premature optimization
   - Show the passing test output

3. **REFACTOR** - Improve code while keeping tests green
   - Clean up duplication
   - Improve naming
   - Extract functions/classes if needed
   - Run tests after each change
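
To make the cycle concrete, here is a minimal sketch using a hypothetical `slugify` helper (the function name and behavior are illustrative, not part of this skill):

```python
# RED: write the test before slugify exists. Running pytest at this stage
# fails for the right reason (the behavior is missing, not an import typo).
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

# GREEN: the smallest implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# REFACTOR: with the test green, rename or extract freely,
# re-running pytest after each change.
```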

## Test Organization

### File Structure

```text
project/
  src/
    module.py
  tests/
    conftest.py          # Shared fixtures
    test_module.py       # Tests for module.py
  pyproject.toml         # Pytest configuration
```

### Naming Conventions

- Test files: `test_*.py` or `*_test.py`
- Test functions: `test_*`
- Test classes: `Test*`
- Fixtures: Descriptive names (`mock_database`, `sample_user`)
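
Put together, these conventions look like this (file, class, and function names are illustrative):

```python
# tests/test_orders.py -- matched by the test_*.py file pattern
class TestOrderTotal:  # classes starting with Test* are collected
    def test_empty_order_totals_zero(self):  # functions starting with test_*
        assert sum([]) == 0
```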

## Pytest Best Practices

### Fixtures

```python
import pytest

@pytest.fixture
def sample_config():
    return {"key": "value"}

@pytest.fixture
def mock_client(mocker):
    # `mocker` is provided by the pytest-mock plugin
    return mocker.MagicMock()
```

### Parametrization

```python
@pytest.mark.parametrize("text, expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("", ""),
])
def test_uppercase(text, expected):
    assert text.upper() == expected
```

### Async Tests

```python
import pytest

# Requires the pytest-asyncio plugin; with asyncio_mode = "auto" in
# pyproject.toml the marker below can be omitted.
@pytest.mark.asyncio
async def test_async_function():
    result = await async_operation()  # async_operation: the coroutine under test
    assert result == expected
```

### Exception Testing

```python
def test_raises_value_error():
    with pytest.raises(ValueError, match="invalid input"):
        process_input(None)
```

## Running Tests

### With uv

```bash
uv run pytest                              # Run all tests
uv run pytest tests/test_module.py         # Run specific file
uv run pytest -k "test_name"               # Run by name pattern
uv run pytest -v --tb=short                # Verbose with short traceback
uv run pytest --cov=src --cov-report=term  # With coverage
```

### Common Flags

- `-v` / `--verbose` - Detailed output
- `-x` / `--exitfirst` - Stop on first failure
- `--tb=short` - Short tracebacks
- `--tb=no` - No tracebacks
- `-k EXPR` - Run tests matching expression
- `-m MARKER` - Run tests with marker
- `--cov=PATH` - Coverage for path
- `--cov-report=term-missing` - Show missing lines

## pyproject.toml Configuration

### Minimal Setup

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
```

### Full Configuration

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_functions = ["test_*"]
python_classes = ["Test*"]
addopts = "-v --tb=short"
markers = [
    "slow: marks tests as slow",
    "integration: marks integration tests",
]
filterwarnings = [
    "ignore::DeprecationWarning",
]

[tool.coverage.run]
source = ["src"]
branch = true
omit = ["tests/*", "*/__init__.py"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
fail_under = 80
show_missing = true
```
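
With the `markers` registered above, tests can be tagged and then selected or deselected at the command line (e.g. `uv run pytest -m "not slow"`); a brief sketch:

```python
import pytest

@pytest.mark.slow
def test_sum_of_first_thousand_integers():
    # Deselected by `-m "not slow"`, selected by `-m slow`.
    assert sum(range(1000)) == 499500
```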

## Report Generation

The `TESTING_REPORT.local.md` file should contain:

1. Test execution summary (passed/failed/skipped)
2. Coverage metrics by module
3. Audit findings by severity
4. Recommendations with file:line references
5. Evidence (command outputs)
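
A possible skeleton for the report, mirroring the section order above (headings and placeholders are illustrative):

```markdown
# Testing Report

## Summary
Passed: <n> | Failed: <n> | Skipped: <n>

## Coverage by Module
| Module | Coverage |
|-----------------|----------|
| src/module.py   | <pct>%   |

## Audit Findings
- [HIGH] <finding> (src/module.py:<line>)

## Recommendations
- <action> (file:line)

## Evidence
<relevant command output>
```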

## Integration with Conversation

When the user asks to write tests:

1. Check conversation history for context about what to test
2. Identify the code/feature being discussed
3. If unclear, ask clarifying questions:
   - "What specific behavior should I test?"
   - "Should I include edge cases for X?"
   - "Do you want unit tests, integration tests, or both?"
4. Follow TDD: Write failing test first, then implement

## Commands Available

- `/tdd-pytest:init` - Initialize pytest configuration
- `/tdd-pytest:test [path]` - Write tests using TDD (context-aware)
- `/tdd-pytest:test-all` - Run all tests
- `/tdd-pytest:report` - Generate/update TESTING_REPORT.local.md

## Overview

This skill is a Python/pytest TDD specialist that guides test-driven development workflows and automates pytest tasks. It helps write failing-first tests, implement minimal code to pass them, and refactor while keeping test suites green. It also audits test quality, runs tests with coverage via uv, and generates a `TESTING_REPORT.local.md` report.

## How this skill works

The skill inspects the project layout, tests, and `pyproject.toml` to suggest or create tests and pytest configuration. It can produce failing tests (RED), implement minimal fixes (GREEN), and propose refactors while re-running pytest after each change. It runs tests through uv, collects coverage, and compiles an audit with evidence into `TESTING_REPORT.local.md`.

## When to use it

- Starting a new feature using red-green-refactor TDD
- Improving or auditing an existing pytest suite for quality issues
- Configuring pytest and coverage in `pyproject.toml`
- Running tests and coverage via uv and generating a test report
- Automating test runs and regression checks before merges

## Best practices

- Write a focused failing test first; ensure it fails for the intended reason
- Keep tests small and descriptive; name files `test_*.py` or `*_test.py` and functions `test_*`
- Use fixtures for shared setup, with descriptive names (e.g., `mock_client`, `sample_user`)
- Parametrize repetitive scenarios and use async markers for async functions
- Run pytest frequently during refactor steps and prefer minimal implementations that pass the tests

## Example use cases

- Create a failing unit test for a new function and implement the smallest passing code
- Audit tests to find flaky cases, missing assertions, and coverage gaps, then produce remediation steps
- Add asyncio support and configure test discovery in `pyproject.toml`
- Run `uv run pytest --cov` and generate `TESTING_REPORT.local.md` summarizing results and recommendations
- Add parametrized tests for input/output combinations and convert duplicated setups into fixtures

## FAQ

**Do you write production code or only tests?**

I follow TDD: I write the failing test first, then suggest minimal production code to make it pass, and finally recommend refactors. You can accept or modify the implementation steps.

**How is `TESTING_REPORT.local.md` structured?**

It includes a test execution summary, per-module coverage metrics, audit findings with severity and file:line references, recommendations, and command output evidence.