
This skill helps you write effective pytest tests by explaining fixtures, parametrize, mocking, and coverage strategies.

`npx playbooks add skill eyadsibai/ltk --skill pytest`

Review the files below or copy the command above to add this skill to your agents.

**Files (1):** `SKILL.md` (2.8 KB)
---
name: Pytest
description: This skill should be used when the user asks about "pytest", "writing tests", "test fixtures", "parametrize", "mocking", "test coverage", "conftest"
version: 1.0.0
---

# Pytest Testing

Guidance for writing effective tests with pytest.

## Core Concepts

### Fixtures

Reusable test setup/teardown. Use `yield` for cleanup.

**Scopes:**

| Scope | Runs | Use Case |
|-------|------|----------|
| `function` | Each test | Default, isolated |
| `class` | Once per class | Shared state in class |
| `module` | Once per file | Expensive setup |
| `session` | Once total | DB connections, servers |

**Key concept**: `conftest.py` makes fixtures available to all tests in its directory and subdirectories.
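
A minimal sketch of a yielding fixture shared through `conftest.py`; the `db_connection` name and its dict stand-in are illustrative, not a prescribed API:

```python
# conftest.py -- fixtures defined here are visible to every test in this directory
import pytest

@pytest.fixture(scope="module")        # created once per test module
def db_connection():
    conn = {"connected": True}         # stand-in for expensive setup (e.g. a real DB handle)
    yield conn                         # the test runs here
    conn["connected"] = False          # code after yield is the teardown, even if the test fails


# test_queries.py -- request the fixture by naming it as a parameter
def test_connection_is_open(db_connection):
    assert db_connection["connected"]
```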

---

### Parametrize

Test multiple inputs without repeating code.

**Key concept**: Parametrize generates a separate test for each parameter set, so failures are isolated.

**Use `pytest.param()` for:**

- Custom test IDs
- Expected failures (`marks=pytest.mark.xfail`)
- Conditional skips
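
A short illustrative sketch: each tuple becomes its own test, and `pytest.param()` attaches a custom ID and an xfail mark to one case.

```python
import pytest

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("42", 42),
        ("  7 ", 7),                   # int() tolerates surrounding whitespace
        pytest.param(
            "0x10", 16,
            id="hex-not-supported",     # custom test ID shown in the report
            marks=pytest.mark.xfail,    # known limitation: int() rejects the 0x prefix
        ),
    ],
)
def test_parse_int(raw, expected):
    assert int(raw) == expected
```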

---

### Markers

Tag tests for filtering and special behavior.

| Marker | Purpose |
|----------|---------|
| `@pytest.mark.skip` | Always skip |
| `@pytest.mark.skipif(condition)` | Conditional skip |
| `@pytest.mark.xfail` | Expected failure |
| `@pytest.mark.asyncio` | Async test (with pytest-asyncio) |

**Custom markers**: Register them under `[tool.pytest.ini_options]` in `pyproject.toml`, then select them with `pytest -m "marker"`
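
An illustrative sketch combining built-in markers with one custom marker (`slow` is an assumed name):

```python
import os
import sys

import pytest

@pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only path behaviour")
def test_posix_path_join():
    assert os.path.join("a", "b") == "a/b"

@pytest.mark.xfail(reason="known rounding surprise, documented rather than fixed")
def test_rounding_edge_case():
    assert round(2.675, 2) == 2.68     # fails: 2.675 cannot be represented exactly in binary

@pytest.mark.slow                       # custom marker; register it in pyproject.toml to avoid warnings
def test_full_reindex():
    ...
```

Once the custom marker is registered, deselect those tests with `pytest -m "not slow"`.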

---

### Mocking

| Tool | Use Case |
|------|----------|
| `Mock()` | Create fake object |
| `patch()` | Temporarily replace an object at its import path |
| `MagicMock()` | Mock with magic methods |
| `AsyncMock()` | Mock async functions |

**Key concept**: Patch where the thing is *used*, not where it's *defined*.
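
A sketch assuming a hypothetical `weather.py` module that does `from requests import get`; the test patches the name where it is used (`weather.get`), not where it is defined (`requests.get`):

```python
# weather.py (hypothetical module under test)
#   from requests import get
#   def current_temp(city):
#       return get(f"https://api.example.com/{city}").json()["temp"]

from unittest.mock import Mock, patch

from weather import current_temp       # assumes the hypothetical module above exists

def test_current_temp_uses_api():
    fake_response = Mock()
    fake_response.json.return_value = {"temp": 21}
    # Patch "weather.get" -- the name the code under test actually calls.
    with patch("weather.get", return_value=fake_response) as fake_get:
        assert current_temp("oslo") == 21
    fake_get.assert_called_once()
```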

---

## Test Organization

```
tests/
├── conftest.py          # Shared fixtures
├── unit/                # Fast, isolated
├── integration/         # Multiple components
└── e2e/                 # Full system
```

---

## Coverage

**Key thresholds:**

- 80%+ is good coverage
- 100% is often overkill
- Branch coverage catches conditionals

**Config in `pyproject.toml`**: Set `source`, `branch=true`, `fail_under`
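
An illustrative example of why branch coverage matters; `apply_discount` is an assumed function for demonstration:

```python
def apply_discount(price, is_member):
    if is_member:
        price *= 0.9
    return price

# This single test executes every line, so plain line coverage reports 100%:
def test_member_discount():
    assert apply_discount(100, True) == 90

# With branch=true, coverage flags the untested False branch of the `if`,
# prompting a second test:
def test_non_member_pays_full_price():
    assert apply_discount(100, False) == 100
```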

---

## Common Commands

| Command | Purpose |
|---------|---------|
| `pytest -v` | Verbose output |
| `pytest -x` | Stop on first failure |
| `pytest -k "pattern"` | Filter by name |
| `pytest -m "marker"` | Filter by marker |
| `pytest -s` | Show print output |
| `pytest -n auto` | Parallel (pytest-xdist) |
| `pytest --cov=src` | With coverage |

---

## Best Practices

| Practice | Why |
|----------|-----|
| One assertion per test | Clear failures |
| Descriptive test names | Self-documenting |
| Arrange-Act-Assert | Consistent structure |
| Test behavior, not implementation | Refactor-proof |
| Use factories for test data | Maintainable |
| Keep tests fast | Run often |
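
A sketch of the Arrange-Act-Assert structure, assuming a hypothetical `cart_total` function in `myapp.pricing`:

```python
import pytest

# Hypothetical function under test:
#   def cart_total(items, tax_rate):
#       return sum(items) * (1 + tax_rate)
from myapp.pricing import cart_total

def test_cart_total_includes_tax():
    # Arrange: the minimal data the behaviour needs
    items, tax_rate = [10.0, 5.0], 0.2

    # Act: exercise exactly one behaviour
    total = cart_total(items, tax_rate)

    # Assert: a single expectation, matching the behaviour named in the test
    assert total == pytest.approx(18.0)
```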

## Resources

- Pytest docs: <https://docs.pytest.org/>
- pytest-asyncio: <https://pytest-asyncio.readthedocs.io/>

## Overview

This skill provides practical guidance for writing and organizing tests with pytest, covering fixtures, parametrization, markers, mocking, coverage, and common commands. It focuses on patterns that keep tests fast, readable, and maintainable. Use it to improve test reliability and developer productivity in Python projects.

## How this skill works

The skill explains core pytest features and when to apply them: fixtures for reusable setup/teardown with configurable scopes, parametrize to run a test across input sets, and markers for filtering or special behavior. It also covers mocking strategies (patch where used), test organization conventions, and configuring coverage thresholds in pyproject.toml. Commands and examples are included to run and debug tests efficiently.

## When to use it

- You need reusable setup/teardown across tests or directories (use fixtures and `conftest.py`).
- You want to run the same test logic against multiple inputs (use `@pytest.mark.parametrize`).
- You need to tag or skip tests conditionally (use markers and skip/xfail).
- You must replace dependencies in tests or simulate async behavior (use `patch`, `Mock`, `AsyncMock`).
- You want reliable metrics for quality gates (configure coverage with `fail_under` and `branch=true`).

## Best practices

- Keep tests fast and isolated; prefer function-scoped fixtures unless sharing is needed.
- Write descriptive test names and follow Arrange-Act-Assert structure for clarity.
- Test behavior, not implementation, to allow safe refactoring.
- Use `pytest.param` for custom IDs, conditional skips, or expected failures.
- Define reusable fixtures in `conftest.py` and keep expensive setup at module or session scope.
- Aim for ~80% coverage; use branch coverage for conditionals, but avoid chasing 100% at the cost of value.

## Example use cases

- Unit tests for a business logic module using function-scoped fixtures and mocks for external services.
- Parametrized validation tests that run one assertion per parameter set to isolate failures.
- Integration tests that use module- or session-scoped fixtures to spin up a test database or server.
- Async code tests using pytest-asyncio with `AsyncMock` to simulate async dependencies.
- A CI pipeline that fails the build if coverage drops below the threshold configured in `pyproject.toml`.

## FAQ

**When should I use conftest.py versus local fixtures?**

Put fixtures that are reused across multiple test modules in `conftest.py`. Keep test-local fixtures in the test file to limit scope and improve clarity.

**How do I mock where something is used?**

Patch the import path used by the code under test, not where the dependency is defined. Use `patch()` or the built-in `monkeypatch` fixture to replace objects at the usage site.
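
A minimal sketch using the `monkeypatch` fixture; `myapp.config` and its `get_database_url` helper are hypothetical names for illustration:

```python
# Hypothetical helper that reads DATABASE_URL from the environment:
#   def get_database_url():
#       return os.environ["DATABASE_URL"]
from myapp.config import get_database_url

def test_database_url_comes_from_env(monkeypatch):
    # monkeypatch changes the value only for this test and undoes it afterwards
    monkeypatch.setenv("DATABASE_URL", "sqlite:///:memory:")
    assert get_database_url() == "sqlite:///:memory:"
```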