
This skill helps you implement Python features using strict red-green-refactor TDD with pytest, FastAPI, and Pydantic testing patterns.

npx playbooks add skill greyhaven-ai/claude-code-config --skill tdd-python

Review the files below or copy the command above to add this skill to your agents.

Files (3): SKILL.md (1.4 KB)
---
name: grey-haven-tdd-python
description: "Python Test-Driven Development expertise with pytest, strict red-green-refactor methodology, FastAPI testing patterns, and Pydantic model testing. Use when implementing Python features with TDD, writing pytest tests, testing FastAPI endpoints, developing with test-first approach, or when user mentions 'Python TDD', 'pytest', 'FastAPI testing', 'red-green-refactor', 'Python unit tests', 'test-driven Python', or 'Python test coverage'."
# v2.0.43: Skills to auto-load for subagents spawned from this skill
skills:
  - grey-haven-code-style
  - grey-haven-api-design-standards
  - grey-haven-test-generation
# v2.0.74: Restrict tools available when this skill is active
allowed-tools:
  - Read
  - Write
  - MultiEdit
  - Bash
  - Grep
  - Glob
  - TodoWrite
---

# TDD Python Skill

Python Test-Driven Development following strict red-green-refactor cycle with pytest and comprehensive coverage.

## Description

Systematic Python implementation using TDD methodology, ensuring tests are written first and drive design decisions.

## What's Included

- **Examples**: Python TDD cycles, FastAPI TDD, Pydantic model TDD
- **Reference**: pytest patterns, Python testing best practices
- **Templates**: pytest templates, TDD workflows

## Use When

- Implementing Python features with TDD
- FastAPI development
- Pydantic model development

## Related Agents

- `tdd-python-implementer`

**Skill Version**: 1.0

## Overview

This skill provides practical Python Test-Driven Development (TDD) expertise focused on pytest, strict red-green-refactor cycles, FastAPI endpoint testing, and Pydantic model validation. It guides test-first design choices and supplies templates and patterns to maintain high test coverage and clear, maintainable code. Use it to drive feature design from tests and to standardize testing practices across Python projects.

## How this skill works

The skill inspects user intent and code context to generate test-first artifacts: failing pytest tests, minimal implementation code, and refactor suggestions to reach green. It produces pytest-friendly test cases for unit logic, FastAPI endpoints, and Pydantic models, plus fixtures and mocks where appropriate. It also recommends the next refactor steps and checks coverage and edge-case handling throughout the cycle.
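As a minimal sketch of the red-green step, a failing test for a hypothetical `slugify` helper is written first, then just enough implementation is added to make it pass (both names are illustrative, not part of the skill's API):

```python
# RED: write the failing test first -- slugify does not exist yet,
# so running pytest at this point fails with a NameError.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


# GREEN: the minimal implementation that makes the test pass.
# Refactoring (e.g. handling punctuation) waits for the next failing test.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")
```

Keeping the implementation this small is deliberate: each new behavior gets its own failing test before any further code is written.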

## When to use it

- When implementing new Python features and you want a strict TDD workflow
- When writing or improving pytest tests for libraries or applications
- When developing or testing FastAPI endpoints and integrations
- When validating Pydantic models, schemas, and data parsing behavior
- When you want clear red-green-refactor guidance to drive design decisions

## Best practices

- Write a single failing test that expresses the smallest new behavior before coding
- Use pytest fixtures and parametrization to reduce duplication and cover edge cases
- Keep implementation minimal to pass tests, then refactor for readability and reusability
- Test FastAPI endpoints with TestClient and include status, body schema, and authentication cases
- Validate Pydantic models with positive and negative input examples and explicit error assertions
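The fixture and parametrization practices above can be sketched as follows (`is_even` and `sample_numbers` are hypothetical names for illustration):

```python
import pytest


def is_even(n: int) -> bool:
    return n % 2 == 0


@pytest.fixture
def sample_numbers():
    # Shared test data; fixtures avoid repeating setup in every test.
    return [2, 4, 6]


# Parametrization covers edge cases (zero, negatives) in one test function.
@pytest.mark.parametrize(
    "value,expected",
    [(0, True), (1, False), (2, True), (-3, False)],
)
def test_is_even(value, expected):
    assert is_even(value) == expected


def test_all_sample_numbers_even(sample_numbers):
    assert all(is_even(n) for n in sample_numbers)
```

Each parametrized case reports as a separate test, so a single failing edge case is easy to pinpoint.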

## Example use cases

- Generate a pytest test that defines desired behavior for a new utility function, then produce the minimal implementation to pass it
- Create end-to-end FastAPI tests for an endpoint including request validation, response schema, and error cases
- Write Pydantic model tests that assert parsing, default values, and validation errors for invalid input
- Convert existing feature requests into an ordered TDD task list: failing test, implementation, refactor, and coverage checks
- Provide concise pytest templates and fixtures to bootstrap a test suite for a new Python package
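A Pydantic model test in the style described above might look like this (the `User` model and its fields are hypothetical examples):

```python
import pytest
from pydantic import BaseModel, ValidationError


class User(BaseModel):
    name: str
    age: int
    role: str = "member"  # default value under test


def test_parses_valid_input():
    user = User(name="Ada", age=36)
    assert user.age == 36
    assert user.role == "member"  # default applied when field is omitted


def test_rejects_invalid_age():
    # Negative case: a non-numeric age must raise ValidationError.
    with pytest.raises(ValidationError):
        User(name="Ada", age="not-a-number")
```

Pairing a positive case with an explicit `pytest.raises(ValidationError)` negative case documents both what the model accepts and what it rejects.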

## FAQ

**Do you produce full implementations or just tests?**

I produce failing tests first, then minimal implementation code to pass them, followed by refactor suggestions—consistent with strict red-green-refactor TDD.

**Can you generate FastAPI test scaffolding with authentication?**

Yes. I generate TestClient-based tests, include auth setup fixtures, and test protected routes for success and failure cases.