feature-test skill

/dotclaude/skills/feature-test

This skill helps you generate and validate unit, integration, and edge-case tests for new features, along with the reusable fixtures those tests depend on.

npx playbooks add skill shotaiuchi/dotclaude --skill feature-test

Review the SKILL.md below or copy the command above to add this skill to your agents.

SKILL.md
---
name: feature-test
description: >-
  Test creation for new features. Apply when creating unit tests,
  integration tests, edge case coverage, and test fixtures for
  newly implemented functionality.
user-invocable: false
---

# Test Writer Implementation

Create tests for newly implemented feature functionality.

## Implementation Checklist

### Unit Test Coverage
- Write tests for all public methods and functions
- Verify happy path scenarios produce expected results
- Check that each test validates a single behavior
- Ensure proper use of mocks and stubs for dependencies
- Validate test naming follows project conventions
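
For instance, a minimal pytest sketch covering these points, using a hypothetical `PriceCalculator` with an injected rate client (all names are illustrative, not part of the skill):

```python
from unittest.mock import Mock


# Hypothetical subject under test: converts prices via an injected rate client.
class PriceCalculator:
    def __init__(self, rate_client):
        self.rate_client = rate_client

    def convert(self, amount, currency):
        return amount * self.rate_client.get_rate(currency)


def test_convert_multiplies_amount_by_fetched_rate():
    # Single behavior: the happy path produces the expected result.
    rate_client = Mock()
    rate_client.get_rate.return_value = 1.5

    result = PriceCalculator(rate_client).convert(10, "EUR")

    assert result == 15.0
    rate_client.get_rate.assert_called_once_with("EUR")  # dependency mocked, not hit
```

The test name states the expected behavior, and the mock keeps the unit isolated from any real rate service.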

### Integration Tests
- Write tests for component interaction boundaries
- Verify API endpoint request/response contracts
- Check database operations with real or in-memory stores
- Ensure external service integrations use test doubles
- Validate end-to-end workflows across layers
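
A sketch of the in-memory store approach, assuming a hypothetical two-function user repository (sqlite3 is from the standard library; the repository functions are illustrative):

```python
import sqlite3

import pytest


# Hypothetical data-access layer under test (illustrative).
def save_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()


def find_user(conn, name):
    row = conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None


@pytest.fixture
def conn():
    # In-memory SQLite: real SQL semantics without an external service.
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    yield connection
    connection.close()


def test_saved_user_is_found_by_name(conn):
    save_user(conn, "ada")
    assert find_user(conn, "ada") == "ada"
```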

### Edge Case Coverage
- Test boundary values (zero, empty, max, min)
- Verify null and undefined input handling
- Check concurrent access and race condition scenarios
- Ensure error paths return appropriate failures
- Validate timeout and retry behavior
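
A sketch of boundary and error-path coverage with `pytest.mark.parametrize`, using a hypothetical `clamp` function:

```python
import pytest


# Hypothetical function under test: clamps a value into [lo, hi].
def clamp(value, lo=0, hi=100):
    if value is None:
        raise TypeError("value must not be None")
    return max(lo, min(value, hi))


@pytest.mark.parametrize(
    ("value", "expected"),
    [
        (0, 0),      # lower boundary
        (100, 100),  # upper boundary
        (-1, 0),     # just below minimum
        (101, 100),  # just above maximum
    ],
)
def test_clamp_boundary_values(value, expected):
    assert clamp(value) == expected


def test_clamp_rejects_none_input():
    # Error path: invalid input fails loudly rather than returning a default.
    with pytest.raises(TypeError):
        clamp(None)
```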

### Test Data & Fixtures
- Create reusable test fixtures and factories
- Verify test data represents realistic scenarios
- Check for proper test isolation (no shared mutable state)
- Ensure setup and teardown clean up resources
- Validate test determinism (no flaky dependencies)
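
A sketch of a factory plus an isolated fixture, assuming a hypothetical user record shape (pytest's built-in `tmp_path` handles per-test cleanup):

```python
import pytest


# Hypothetical factory: realistic defaults, overridable per test.
def make_user(**overrides):
    user = {"name": "ada", "email": "ada@example.com", "active": True}
    user.update(overrides)
    return user


@pytest.fixture
def store_dir(tmp_path):
    # tmp_path is a fresh per-test directory pytest cleans up automatically,
    # so tests share no mutable state and need no manual teardown.
    path = tmp_path / "store"
    path.mkdir()
    return path


def test_inactive_user_keeps_realistic_defaults(store_dir):
    user = make_user(active=False)
    assert user["active"] is False
    assert user["email"] == "ada@example.com"
```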

## Output Format

Report implementation status:

| Status | Description |
|--------|-------------|
| Complete | Fully implemented and verified |
| Partial | Implementation started; remaining work needed |
| Blocked | Cannot proceed due to dependency or decision needed |
| Skipped | Not applicable to this feature |
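
For example, a report for a hypothetical feature might look like:

| Item | Status | Notes |
|------|--------|-------|
| Unit tests for public methods | Complete | All happy paths verified |
| API contract tests | Partial | POST covered; DELETE pending |
| Concurrency edge cases | Blocked | Locking strategy undecided |
| Load-test fixtures | Skipped | No load-sensitive path in this feature |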

Overview

This skill helps create and verify tests for newly implemented features. It guides writing unit and integration tests, covering edge cases, and building reusable fixtures to ensure deterministic, maintainable test suites.

How this skill works

The skill inspects the new feature surface and produces a checklist-driven test plan that maps public methods, endpoints, and interactions to test cases. It recommends test doubles, fixtures, and environment choices, then reports implementation status using a simple Complete/Partial/Blocked/Skipped rubric. It focuses on single-behavior tests, realistic test data, and isolation to minimize flakiness.

When to use it

  • After implementing new functions, classes, or API endpoints
  • When adding integration points between components or services
  • While preparing for release to ensure edge cases are covered
  • When creating or refactoring test fixtures and shared factories
  • If tests are flaky or nondeterministic and need stabilization

Best practices

  • Write one assertion intent per test and name tests to reflect expected behavior
  • Mock external dependencies; use in-memory stores or test doubles for integration work
  • Cover boundary values, null/undefined inputs, and error paths explicitly
  • Create reusable fixtures and factories; ensure setup/teardown removes shared state
  • Prefer deterministic timing and avoid sleeping; simulate timeouts and retries where possible
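
As a sketch of the last point, retry logic can be tested without real waiting by injecting the sleep function (the `fetch_with_retry` helper is hypothetical):

```python
import time


# Hypothetical helper: retries a callable, waiting via an injectable sleep.
def fetch_with_retry(fetch, retries=3, delay=1.0, sleep=time.sleep):
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            sleep(delay)


def test_retries_then_succeeds_without_real_sleeping():
    calls = {"count": 0}

    def flaky_fetch():
        calls["count"] += 1
        if calls["count"] < 3:
            raise ConnectionError("transient")
        return "ok"

    recorded = []  # stands in for sleep: records delays instead of waiting
    result = fetch_with_retry(flaky_fetch, sleep=recorded.append)

    assert result == "ok"
    assert recorded == [1.0, 1.0]  # two simulated waits, zero real seconds
```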

Example use cases

  • Unit tests for all public methods in a new library module with mocks for IO
  • Integration tests validating API request/response contracts and database changes
  • Edge-case tests for concurrency, race conditions, and large input bounds
  • Reusable fixtures for user records to speed up test creation across suites
  • Test plan report that marks each checklist item as Complete, Partial, Blocked, or Skipped

FAQ

How do I choose between a mock and an in-memory store?

Prefer mocks for isolated unit tests and in-memory stores for integration-style verification where persistence behavior matters.

What indicates a flaky test to address first?

Intermittent failures, time-dependent assertions, and shared mutable state are common signs of flakiness; prioritize stabilizing these tests by isolating state and simulating time.