---
name: testing-python-libraries
description: Designs and implements pytest test suites for Python libraries with fixtures, parametrization, mocking, Hypothesis property-based testing, and CI configuration. Use when creating tests, improving coverage, setting up testing infrastructure, or implementing property-based testing.
---
# Python Library Testing
## Quick Start
```bash
pytest # Run tests
pytest --cov=my_library # With coverage
pytest -x # Stop on first failure
pytest -k "test_encode" # Run matching tests
```
## Pytest Configuration
```toml
# pyproject.toml
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-ra -q --cov=my_library --cov-fail-under=85"

[tool.coverage.run]
branch = true
source = ["src/my_library"]
```
## Test Structure
```
tests/
├── conftest.py # Shared fixtures
├── test_encoding.py
└── test_decoding.py
```
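A minimal `conftest.py` sketch matching the structure above; the fixture names and coordinate data are illustrative, not part of any real library:
```python
# tests/conftest.py
import pytest

@pytest.fixture
def sample_coords():
    # Available to every test module under tests/ without an import.
    return [(37.7749, -122.4194), (40.7128, -74.0060)]

@pytest.fixture
def config_file(tmp_path):
    # Builds on pytest's built-in tmp_path fixture so each test gets
    # its own throwaway config file instead of touching real state.
    path = tmp_path / "config.toml"
    path.write_text("precision = 12\n")
    return path
```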
## Essential Patterns
**Basic test:**
```python
from my_library import encode

def test_encode_valid_input():
    result = encode(37.7749, -122.4194)
    assert isinstance(result, str)
    assert len(result) == 12
```
**Parametrization:**
```python
import pytest

from my_library import encode

@pytest.mark.parametrize("lat,lon,expected", [
    (37.7749, -122.4194, "9q8yy"),
    (40.7128, -74.0060, "dr5ru"),
])
def test_known_values(lat, lon, expected):
    assert encode(lat, lon, precision=5) == expected
```
**Fixtures:**
```python
import pytest

from my_library import batch_encode

@pytest.fixture
def sample_data():
    return [(37.7749, -122.4194), (40.7128, -74.0060)]

def test_batch(sample_data):
    results = batch_encode(sample_data)
    assert len(results) == 2
```
**Mocking:**
```python
import my_library

def test_api_call(mocker):
    # mocker is provided by the pytest-mock plugin
    mocker.patch("my_library.client.fetch", return_value={"data": []})
    result = my_library.get_data()
    assert result == []
```
**Exception testing:**
```python
import pytest

from my_library import encode

def test_invalid_raises():
    with pytest.raises(ValueError, match="latitude"):
        encode(91.0, 0.0)
```
For detailed patterns, see:
- **[FIXTURES.md](FIXTURES.md)** - Advanced fixture patterns
- **[HYPOTHESIS.md](HYPOTHESIS.md)** - Property-based testing
- **[CI.md](CI.md)** - CI/CD test configuration
## Test Principles
| Principle | Meaning |
|-----------|---------|
| Independent | No shared state between tests |
| Deterministic | Same result every run |
| Fast | Unit tests < 100ms each |
| Focused | Test behavior, not implementation |
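Two of these principles in code, as a hedged sketch: pytest's built-in `tmp_path` keeps tests independent, and a locally seeded RNG keeps randomized inputs deterministic.
```python
import random

def test_output_is_isolated(tmp_path):
    # tmp_path is a fresh directory per test, so no state leaks
    # between tests and run order never matters.
    out = tmp_path / "result.txt"
    out.write_text("9q8yy")
    assert out.read_text() == "9q8yy"

def test_random_inputs_are_deterministic():
    # Seeding a local RNG makes randomized test data reproducible
    # across runs and machines.
    def draws():
        return random.Random(0).choices(range(10), k=3)
    assert draws() == draws()
```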
## Checklist
```
Testing:
- [ ] Tests exist for public API
- [ ] Edge cases covered (empty, boundary, error)
- [ ] No external service dependencies (mock them)
- [ ] Coverage > 85%
- [ ] Tests run in CI
```
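The edge-case item above often maps directly onto a parametrized test; a sketch assuming the same illustrative `encode`:
```python
import pytest

from my_library import encode

@pytest.mark.parametrize("lat,lon", [
    pytest.param(0.0, 0.0, id="origin"),
    pytest.param(90.0, 180.0, id="upper-bounds"),
    pytest.param(-90.0, -180.0, id="lower-bounds"),
])
def test_boundary_coordinates(lat, lon):
    # Boundary values should encode cleanly rather than raise.
    assert isinstance(encode(lat, lon), str)
```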
## Learn More
This skill is based on the [Code Quality](https://mcginniscommawill.com/guides/python-library-development/#code-quality-the-foundation) section of the [Guide to Developing High-Quality Python Libraries](https://mcginniscommawill.com/guides/python-library-development/) by [Will McGinnis](https://mcginniscommawill.com/).
## FAQ
**How do I test code that calls external services?**
Mock the external client or HTTP layer. Use pytest-mock or requests-mock to replace network interactions with deterministic responses so tests stay fast and isolated.
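For HTTP specifically, the `requests_mock` fixture from the requests-mock plugin intercepts calls made through `requests`; the URL and payload below are illustrative:
```python
import requests

def test_fetch_points_mocked(requests_mock):
    # No real network traffic: requests-mock answers the GET itself.
    requests_mock.get("https://api.example.com/points", json={"data": []})
    resp = requests.get("https://api.example.com/points", timeout=5)
    assert resp.json() == {"data": []}
```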
**When should I use Hypothesis versus parametrized tests?**
Use parametrization for documented, specific examples and regression cases. Use Hypothesis when you want wide, automated input exploration to find edge cases and validate general invariants across many inputs.
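A minimal Hypothesis sketch of that idea, assuming a hypothetical `encode`/`decode` round-trip pair in `my_library`:
```python
from hypothesis import given, strategies as st

from my_library import decode, encode  # hypothetical round-trip pair

@given(
    lat=st.floats(min_value=-90, max_value=90),
    lon=st.floats(min_value=-180, max_value=180),
)
def test_roundtrip_stays_close(lat, lon):
    # The invariant: decoding an encoded point lands near the input,
    # for any valid coordinate Hypothesis generates.
    decoded_lat, decoded_lon = decode(encode(lat, lon, precision=12))
    assert abs(decoded_lat - lat) < 1e-5
    assert abs(decoded_lon - lon) < 1e-5
```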