
tdd skill

/skills/common/tdd

This skill enforces test-driven development by guiding the red-green-refactor cycle and ensuring failing tests precede production code.

npx playbooks add skill hoangnguyen0403/agent-skills-standard --skill tdd


Files (5)
SKILL.md
1.3 KB
---
name: tdd
description: Enforces Test-Driven Development (Red-Green-Refactor) for rigorous code quality.
---

# Test-Driven Development (TDD)

## **Priority: P1 (OPERATIONAL)**

## **The Iron Law**

> **NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST.**
> If you wrote code before the test: **Delete it.** Start over. No "adapting" or keeping as reference.

## **The TDD Cycle**

1. **RED**: Write a minimal failing test. **Verify the failure** (the expected "feature missing" error, not a typo or setup mistake).
2. **GREEN**: Write the simplest code that passes. **Verify the pass** (clean output, no warnings or unrelated failures).
3. **REFACTOR**: Clean up code while staying green.

## **Core Principles**

- **Watch it Fail**: If you didn't see it fail, you didn't prove the test works.
- **Minimalism**: Don't add features/options beyond the current test (YAGNI).
- **Real Over Mock**: Prefer real dependencies unless they are slow/flaky. Avoid [Anti-Patterns](references/testing_anti_patterns.md).

## **Verification Checklist**

- [ ] Every new function/method has a failing test first?
- [ ] Failure message was expected (feature missing, not setup error)?
- [ ] Minimal code implemented (no over-engineering)?
- [ ] [Common Pitfalls](references/testing_anti_patterns.md) avoided?

## **Expert References**

- [TDD Patterns & Discovery Protocols](references/tdd_patterns.md)
- [Testing Anti-Patterns (Safety First)](references/testing_anti_patterns.md)

Overview

This skill enforces Test-Driven Development (TDD) through the Red-Green-Refactor cycle to keep codebases reliable and maintainable. It requires a failing test before any production code and provides a verification checklist to prevent common testing anti-patterns. The guidance applies across the languages and frameworks covered by this collection.

How this skill works

The skill inspects development workflows and prompts agents to write a minimal failing test first (RED), implement the simplest code to pass (GREEN), then refactor while keeping tests green. It verifies that tests actually failed as expected, that implementations are minimal, and that real dependencies are preferred over unnecessary mocks. A short checklist enforces discipline and highlights common pitfalls to avoid.
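The cycle described above can be sketched in a few lines of Python. The `slugify` function and its test are hypothetical, illustrative names, not part of this skill:

```python
# RED: write a minimal failing test first. Running pytest at this point
# fails with a NameError for slugify -- the expected "feature missing"
# failure, not a typo in the test itself.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"


# GREEN: the simplest implementation that makes the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# REFACTOR: tidy up while the test stays green (nothing to change yet).
```

Running the test once before and once after writing `slugify` is what "watch it fail" means in practice.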

When to use it

  • When introducing new features or functions in any supported language or framework.
  • During bug fixes to drive precise, reproducible tests before implementation.
  • When onboarding contributors to ensure consistent testing habits.
  • Prior to refactoring to guarantee behavior is preserved.
  • When building critical logic where regressions are costly.

Best practices

  • Always start by writing a minimal failing test and confirm it fails for the expected reason.
  • Implement only enough code to make the test pass; avoid speculative features (YAGNI).
  • Prefer real dependencies in tests; mock only when dependencies are slow, flaky, or external.
  • Run the verification checklist for every change: failing test first, expected failure message, minimal implementation, and anti-pattern avoidance.
  • Refactor aggressively while running the test suite frequently to keep feedback tight.
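The "prefer real dependencies" practice above can be illustrated with SQLite's in-memory database, which is fast enough to use directly in a test. `save_user` is an illustrative function, not part of this skill:

```python
import sqlite3

# A real (in-memory) dependency instead of a mock: the test exercises
# actual SQL, so a schema or query bug cannot hide behind a stub.
def save_user(conn, name):
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))


def test_save_user_roundtrip():
    conn = sqlite3.connect(":memory:")  # real dependency, no mocking
    save_user(conn, "ada")
    row = conn.execute("SELECT name FROM users").fetchone()
    assert row == ("ada",)
```

A mocked connection here would only verify that `execute` was called, not that the SQL is valid.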

Example use cases

  • Add a new API endpoint: write failing integration/unit test, implement handler, refactor for clarity.
  • Fix a bug: write a test that reproduces the bug, implement the fix, then clean up code paths.
  • Refactor a module: add regression tests first, run them to fail, refactor while tests stay green.
  • Onboard a teammate: enforce TDD steps during pair programming to teach the workflow.
  • Continuous improvement: integrate TDD checks into PR reviews to ensure compliance.
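The bug-fix case above follows the same rhythm: reproduce first, then fix. A minimal sketch with a hypothetical `average` function that crashed on empty input:

```python
# RED: this test reproduces the reported bug. Before the fix, it fails
# with ZeroDivisionError -- the expected failure for this defect.
def test_average_of_empty_list_is_zero():
    assert average([]) == 0


# GREEN: the minimal fix driven by the failing test, nothing more.
def average(numbers):
    if not numbers:
        return 0
    return sum(numbers) / len(numbers)
```

The regression test stays in the suite, so the bug cannot silently return.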

FAQ

What if I already wrote production code before a test?

Delete or revert the production code and restart the cycle with a failing test. Keeping pre-written code undermines the guarantee that the test actually proves the behavior.

When are mocks acceptable?

Use mocks when external dependencies are slow, unstable, or impossible to run in the test environment. Prefer real dependencies otherwise to avoid false confidence.
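A justified mock might look like the sketch below, where the dependency is an external HTTP service that cannot run in the test environment. The client and endpoint names are illustrative, not a real API:

```python
from unittest.mock import Mock

# The client talks to an external service, so mocking it here is one of
# the acceptable cases: slow, unstable, or unavailable in tests.
def fetch_greeting(client, name):
    response = client.get(f"/greet/{name}")
    return response["message"]


def test_fetch_greeting_uses_the_client():
    client = Mock()
    client.get.return_value = {"message": "hello, ada"}
    assert fetch_greeting(client, "ada") == "hello, ada"
    client.get.assert_called_once_with("/greet/ada")
```

Everything else in `fetch_greeting` (URL construction, response parsing) still runs for real; only the network boundary is faked.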