
This skill guides test-driven development by generating tests, analyzing coverage, and steering red-green-refactor cycles across Pytest, Jest, JUnit, and Vitest.

`npx playbooks add skill openclaw/skills --skill tdd-guide`

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: tdd-guide
description: Test-driven development workflow with test generation, coverage analysis, and multi-framework support
triggers:
  - generate tests
  - analyze coverage
  - TDD workflow
  - red green refactor
  - Jest tests
  - Pytest tests
  - JUnit tests
  - coverage report
---

# TDD Guide

Test-driven development skill for generating tests, analyzing coverage, and guiding red-green-refactor workflows across Jest, Pytest, JUnit, and Vitest.

## Table of Contents

- [Capabilities](#capabilities)
- [Workflows](#workflows)
- [Tools](#tools)
- [Input Requirements](#input-requirements)
- [Limitations](#limitations)

---

## Capabilities

| Capability | Description |
|------------|-------------|
| Test Generation | Convert requirements or code into test cases with proper structure |
| Coverage Analysis | Parse LCOV/JSON/XML reports, identify gaps, prioritize fixes |
| TDD Workflow | Guide red-green-refactor cycles with validation |
| Framework Adapters | Generate tests for Jest, Pytest, JUnit, Vitest, Mocha |
| Quality Scoring | Assess test isolation, assertions, naming, detect test smells |
| Fixture Generation | Create realistic test data, mocks, and factories |

---

## Workflows

### Generate Tests from Code

1. Provide source code (TypeScript, JavaScript, Python, Java)
2. Specify target framework (Jest, Pytest, JUnit, Vitest)
3. Run `test_generator.py` with requirements
4. Review generated test stubs (see the example after this list)
5. **Validation:** Tests compile and cover happy path, error cases, edge cases
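For orientation, here is the rough shape a generated Pytest stub might take for a hypothetical `slugify` function; the module, function, and cases are invented for this example, and actual output depends on the source you provide.

```python
# Hypothetical shape of output from:
#   python scripts/test_generator.py --input slugify.py --framework pytest
import pytest

from slugify import slugify  # assumed module under test


def test_slugify_happy_path():
    assert slugify("Hello World") == "hello-world"


def test_slugify_strips_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"


def test_slugify_empty_string_raises():
    with pytest.raises(ValueError):
        slugify("")
```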

### Analyze Coverage Gaps

1. Generate coverage report from test runner (`npm test -- --coverage`)
2. Run `coverage_analyzer.py` on the LCOV/JSON/XML report (the LCOV record format is sketched below)
3. Review prioritized gaps (P0/P1/P2)
4. Generate missing tests for uncovered paths
5. **Validation:** Coverage meets target threshold (typically 80%+)
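For orientation, LCOV reports store per-file line records as `SF:<path>` followed by `DA:<line>,<hits>` entries; any line with zero hits is a gap. The sketch below reads those records directly; it assumes a local `lcov.info` and does not reproduce the P0/P1/P2 prioritization that `coverage_analyzer.py` performs.

```python
# Minimal LCOV reader: list uncovered line numbers per source file.
from collections import defaultdict


def uncovered_lines(lcov_path: str) -> dict[str, list[int]]:
    gaps: dict[str, list[int]] = defaultdict(list)
    current_file = None
    with open(lcov_path) as fh:
        for raw in fh:
            line = raw.strip()
            if line.startswith("SF:"):  # start of a source-file section
                current_file = line[3:]
            elif line.startswith("DA:") and current_file:
                lineno, hits = line[3:].split(",")[:2]
                if int(hits) == 0:  # line never executed by any test
                    gaps[current_file].append(int(lineno))
            elif line == "end_of_record":
                current_file = None
    return dict(gaps)


print(uncovered_lines("lcov.info"))
```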

### TDD New Feature

1. Write a failing test first (RED; see the sketch after this list)
2. Run `tdd_workflow.py --phase red` to validate
3. Implement minimal code to pass (GREEN)
4. Run `tdd_workflow.py --phase green` to validate
5. Refactor while keeping tests green (REFACTOR)
6. **Validation:** All tests pass after each cycle
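To make the cycle concrete, here is a minimal RED/GREEN pair for an invented `apply_discount` requirement; the module and rules are illustrative only.

```python
# RED phase: test_pricing.py is written before pricing.py exists, so it fails.
import pytest

from pricing import apply_discount


def test_ten_percent_discount():
    assert apply_discount(100.0, 0.10) == 90.0


def test_negative_rate_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, -0.5)
```

```python
# GREEN phase: the minimal pricing.py that makes both tests pass.
def apply_discount(price: float, rate: float) -> float:
    if rate < 0:
        raise ValueError("rate must be non-negative")
    return price * (1 - rate)
```

The REFACTOR step would then clean up names or structure while both tests stay green.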

---

## Tools

| Tool | Purpose | Usage |
|------|---------|-------|
| `test_generator.py` | Generate test cases from code/requirements | `python scripts/test_generator.py --input source.py --framework pytest` |
| `coverage_analyzer.py` | Parse and analyze coverage reports | `python scripts/coverage_analyzer.py --report lcov.info --threshold 80` |
| `tdd_workflow.py` | Guide red-green-refactor cycles | `python scripts/tdd_workflow.py --phase red --test test_auth.py` |
| `framework_adapter.py` | Convert tests between frameworks | `python scripts/framework_adapter.py --from jest --to pytest` |
| `fixture_generator.py` | Generate test data and mocks | `python scripts/fixture_generator.py --entity User --count 5` |
| `metrics_calculator.py` | Calculate test quality metrics | `python scripts/metrics_calculator.py --tests tests/` |
| `format_detector.py` | Detect language and framework | `python scripts/format_detector.py --file source.ts` |
| `output_formatter.py` | Format output for CLI/desktop/CI | `python scripts/output_formatter.py --format markdown` |

---

## Input Requirements

**For Test Generation:**
- Source code (file path or pasted content)
- Target framework (Jest, Pytest, JUnit, Vitest)
- Coverage scope (unit, integration, edge cases)

**For Coverage Analysis:**
- Coverage report file (LCOV, JSON, or XML format)
- Optional: Source code for context
- Optional: Target threshold percentage

**For TDD Workflow:**
- Feature requirements or user story
- Current phase (RED, GREEN, REFACTOR)
- Test code and implementation status

---

## Limitations

| Scope | Details |
|-------|---------|
| Unit test focus | Integration and E2E tests require different patterns |
| Static analysis | Cannot execute tests or measure runtime behavior |
| Language support | Best for TypeScript, JavaScript, Python, Java |
| Report formats | LCOV, JSON, XML only; other formats need conversion |
| Generated tests | Provide scaffolding; require human review for complex logic |

**When to use other tools:**
- E2E testing: Playwright, Cypress, Selenium
- Performance testing: k6, JMeter, Locust
- Security testing: OWASP ZAP, Burp Suite

## Overview

This skill implements a practical test-driven development (TDD) workflow that generates tests, analyzes coverage reports, and supports multiple frameworks (Jest, Pytest, JUnit, Vitest). It provides tooling to guide red-green-refactor cycles, surface coverage gaps, and scaffold realistic fixtures and mocks. The focus is on accelerating safe, repeatable TDD across Python, JavaScript/TypeScript, and Java projects.

## How this skill works

Feed it source code or requirements and select a target framework; the test generator produces structured test stubs covering happy paths, error cases, and edge cases. The coverage analyzer parses LCOV/JSON/XML reports to prioritize uncovered code and suggest missing tests. The TDD workflow tooling validates the RED/GREEN/REFACTOR phases, while adapters and fixture generators convert tests between frameworks and create realistic test data.
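To illustrate the conversion direction, a Jest test such as `test("adds", () => { expect(add(2, 2)).toBe(4); })` maps naturally onto a Pytest function; the sketch below is a hand-written target, not literal `framework_adapter.py` output, and `calculator` is an assumed module.

```python
# Hand-written Pytest equivalent of a Jest test:
#   test("adds two numbers", () => { expect(add(2, 2)).toBe(4); });
# Jest test() blocks map to plain test functions; expect().toBe()
# maps to a bare assert.
from calculator import add  # assumed module under test


def test_adds_two_numbers():
    assert add(2, 2) == 4
```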

## When to use it

- Starting a new feature with red-first TDD to keep scope minimal and verifiable
- Migrating tests between frameworks (e.g., Jest ↔ Pytest) to standardize suites
- Analyzing coverage reports after a test run to prioritize gaps and meet thresholds
- Scaffolding test cases and fixtures for legacy code under test
- Getting a lightweight quality score that flags test smells and weak assertions

## Best practices

- Treat generated tests as scaffolding: review and refine assertions and edge cases
- Keep TDD cycles short: write a failing test, implement the minimal code, then refactor
- Set realistic coverage targets (e.g., 80%+) and close P0 gaps first
- Use `framework_adapter.py` for mechanical conversion only; verify behavioral parity manually
- Keep generated fixtures realistic and deterministic to avoid brittle tests (see the sketch below)
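As a sketch of the deterministic-fixture idea, the factory below seeds a local RNG so every run produces identical data; the `User` entity echoes the `fixture_generator.py --entity User --count 5` example above, but the fields and factory are invented here.

```python
# Deterministic user factory: a fixed seed reproduces the same fixtures
# on every run, so assertions never break because test data drifted.
import random
from dataclasses import dataclass


@dataclass
class User:
    id: int
    name: str
    email: str


def make_users(count: int, seed: int = 42) -> list[User]:
    rng = random.Random(seed)  # local RNG; global random state untouched
    names = ["ada", "grace", "alan", "edsger", "barbara"]
    users = []
    for i in range(count):
        name = rng.choice(names)
        users.append(User(id=i + 1, name=name, email=f"{name}{i}@example.test"))
    return users


assert make_users(5) == make_users(5)  # same seed, same fixtures
```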

## Example use cases

- Generate Pytest stubs from a new Python module and iterate via red-green-refactor
- Parse an LCOV report to identify P0 functions missing unit tests and auto-generate stubs
- Convert a Jest test suite to Pytest as part of a gradual backend rewrite
- Run TDD workflow validation in CI to ensure each PR follows red-green-refactor
- Create factories and mocks for complex domain entities to speed up unit testing

## FAQ

**Can this skill run my tests or measure runtime behavior?**

No. The tooling performs static generation and analysis; executing tests and measuring runtime behavior is left to your test runner and CI environment.

**Which coverage formats are supported?**

LCOV, JSON, and XML coverage reports are supported. Other formats need conversion before analysis.