
tdd-cycle skill


This skill guides you through the Test-Driven Development cycle, helping you write failing tests first and refactor with confidence.

npx playbooks add skill thibautbaissac/rails_ai_agents --skill tdd-cycle


Files (1)
SKILL.md
---
name: tdd-cycle
description: Guides Test-Driven Development workflow with Red-Green-Refactor cycle. Use when the user wants to implement a feature using TDD, write tests first, follow test-driven practices, or mentions red-green-refactor.
allowed-tools: Read, Write, Edit, Bash
---

# TDD Cycle Skill

## Overview

This skill guides you through the Test-Driven Development cycle:
1. **RED**: Write a failing test that describes desired behavior
2. **GREEN**: Write minimal code to pass the test
3. **REFACTOR**: Improve code while keeping tests green

## Workflow Checklist

Copy and track progress:

```
TDD Progress:
- [ ] Step 1: Understand the requirement
- [ ] Step 2: Choose test type (unit/request/system)
- [ ] Step 3: Write failing spec (RED)
- [ ] Step 4: Verify spec fails correctly
- [ ] Step 5: Implement minimal code (GREEN)
- [ ] Step 6: Verify spec passes
- [ ] Step 7: Refactor if needed
- [ ] Step 8: Verify specs still pass
```

## Step 1: Requirement Analysis

Before writing any code, understand:
- What is the expected input?
- What is the expected output/behavior?
- What are the edge cases?
- What errors should be handled?

Ask clarifying questions if requirements are ambiguous.
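The analysis questions can be captured as plain data before any spec exists. A hypothetical sketch for a `User#full_name` requirement (the attribute names and edge cases here are illustrative, not from a real codebase):

```ruby
# Requirement analysis for a hypothetical User#full_name, written down
# as data so nothing is left implicit before the first spec.
analysis = {
  input:      'a user with first_name and last_name attributes',
  output:     'a single space-separated string',
  edge_cases: ['last_name is nil', 'names contain extra whitespace'],
  errors:     ['no exception should be raised for nil parts']
}

# Each edge case should become its own `context` block in the spec.
analysis[:edge_cases].each { |edge| puts "context: when #{edge}" }
```

Turning each answer into a `describe`/`context`/`it` line later makes the spec file a direct transcription of the analysis.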

## Step 2: Choose Test Type

| Test Type | Use For | Location | Example |
|-----------|---------|----------|---------|
| Model spec | Validations, scopes, instance methods | `spec/models/` | Testing `User#full_name` |
| Request spec | API endpoints, HTTP responses | `spec/requests/` | Testing `POST /api/users` |
| System spec | Full user flows with JavaScript | `spec/system/` | Testing login flow |
| Service spec | Business logic, complex operations | `spec/services/` | Testing `CreateOrderService` |
| Job spec | Background job behavior | `spec/jobs/` | Testing `SendEmailJob` |

## Step 3: Write Failing Spec (RED)

### Spec Structure

```ruby
# frozen_string_literal: true

require 'rails_helper'

RSpec.describe ClassName, type: :spec_type do
  describe '#method_name' do
    subject { described_class.new(args) }

    context 'when condition is met' do
      let(:dependency) { create(:factory) }

      it 'behaves as expected' do
        expect(subject.method_name).to eq(expected_value)
      end
    end

    context 'when edge case' do
      it 'handles gracefully' do
        expect { subject.method_name }.to raise_error(SpecificError)
      end
    end
  end
end
```

### Good Spec Characteristics

- **One behavior per example**: Each `it` block tests one thing
- **Clear description**: Reads like a sentence when combined with `describe`/`context`
- **Minimal setup**: Only create data needed for the specific test
- **Fast execution**: Avoid unnecessary database hits, use `build` over `create` when possible
- **Independent**: Tests don't depend on order or shared state

### Templates

- See [unit_spec.erb](templates/unit_spec.erb) for model/service specs
- See [request_spec.erb](templates/request_spec.erb) for API specs

## Step 4: Verify Failure

Run the spec:
```bash
bundle exec rspec path/to/spec.rb --format documentation
```

The spec MUST fail with a clear message indicating:
- What was expected
- What was received (or that the method/class doesn't exist)
- Why it failed

**Important**: If the spec passes immediately, you're not doing TDD. Either:
- The behavior already exists (check if this is intentional)
- The spec is wrong (not testing what you think)
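A correct RED failure fails for the right reason. A minimal plain-Ruby illustration (no Rails or RSpec required, names are hypothetical): the method under test does not exist yet, so the check fails with `NoMethodError` rather than a wrong assertion.

```ruby
# Plain-Ruby sketch of a correct RED state: full_name is deliberately
# not defined yet, so calling it fails with NoMethodError.
class User
  attr_reader :first_name, :last_name

  def initialize(first_name, last_name)
    @first_name = first_name
    @last_name = last_name
  end
  # full_name is intentionally NOT defined yet
end

user = User.new('Ada', 'Lovelace')
begin
  user.full_name
  puts 'unexpected pass -- not doing TDD'
rescue NoMethodError => e
  puts "RED: #{e.class}" # fails for the right reason: method missing
end
```

If instead the failure were a wrong comparison or a setup error, the spec would be testing something other than intended.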

## Step 5: Implement (GREEN)

Write the MINIMUM code to pass:
- No optimization yet
- No edge case handling (unless that's what you're testing)
- No refactoring
- Just make it work

```ruby
# Start with the simplest thing that could work
def full_name
  "#{first_name} #{last_name}"
end
```
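In full context, the GREEN step for a hypothetical `User#full_name` is just the method body the failing check demanded, and nothing more:

```ruby
# Plain-Ruby sketch of GREEN: the simplest definition that satisfies the
# failing check -- no edge-case handling, no optimization yet.
class User
  attr_reader :first_name, :last_name

  def initialize(first_name, last_name)
    @first_name = first_name
    @last_name = last_name
  end

  def full_name
    "#{first_name} #{last_name}" # minimal: just enough to pass
  end
end

puts User.new('Ada', 'Lovelace').full_name # => Ada Lovelace
```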

## Step 6: Verify Pass

Run the spec again:
```bash
bundle exec rspec path/to/spec.rb --format documentation
```

It MUST pass. If it fails:
1. Read the error carefully
2. Fix the implementation (not the spec, unless the spec was wrong)
3. Run again

## Step 7: Refactor

Now improve the code while keeping tests green:

### Refactoring Targets
- **Extract methods**: Long methods → smaller focused methods
- **Improve naming**: Unclear names → intention-revealing names
- **Remove duplication**: Repeated code → shared abstractions
- **Simplify logic**: Complex conditionals → cleaner patterns

### Refactoring Rules
1. Make ONE change at a time
2. Run specs after EACH change
3. If specs fail, undo and try a different approach
4. Stop when code is clean (don't over-engineer)
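A small extract-method refactor, sketched in plain Ruby with hypothetical names. The assertion at the bottom plays the role of the spec: it must stay green after each change.

```ruby
# Extract-method refactor: the inline expression item[:price] * item[:qty]
# is pulled into line_total, an intention-revealing name. Behavior is
# unchanged, which the assertion verifies.
class Invoice
  def initialize(items)
    @items = items # e.g. [{ price: 10, qty: 2 }]
  end

  def total
    @items.sum { |item| line_total(item) }
  end

  private

  def line_total(item)
    item[:price] * item[:qty]
  end
end

invoice = Invoice.new([{ price: 10, qty: 2 }, { price: 5, qty: 1 }])
raise 'regression!' unless invoice.total == 25 # the "spec" stays green
puts invoice.total # => 25
```

Each such rename or extraction is one change; run the suite before starting the next.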

## Step 8: Final Verification

Run all related specs:
```bash
bundle exec rspec spec/models/user_spec.rb
```

All specs must pass. If any fail:
- Undo recent changes
- Try a different refactoring approach
- Consider if the failing spec reveals a real bug

## Common Patterns

### Testing Validations

```ruby
describe 'validations' do
  it { is_expected.to validate_presence_of(:email) }
  it { is_expected.to validate_uniqueness_of(:email).case_insensitive }
  it { is_expected.to validate_length_of(:name).is_at_most(100) }
end
```

### Testing Associations

```ruby
describe 'associations' do
  it { is_expected.to belong_to(:organization) }
  it { is_expected.to have_many(:posts).dependent(:destroy) }
end
```

### Testing Scopes

```ruby
describe '.active' do
  let!(:active_user) { create(:user, status: :active) }
  let!(:inactive_user) { create(:user, status: :inactive) }

  it 'returns only active users' do
    expect(User.active).to contain_exactly(active_user)
  end
end
```

### Testing Service Objects

```ruby
describe '#call' do
  subject(:result) { described_class.new.call(params) }

  context 'with valid params' do
    let(:params) { { email: '[email protected]' } }

    it 'returns success' do
      expect(result).to be_success
    end

    it 'creates a user' do
      expect { result }.to change(User, :count).by(1)
    end
  end

  context 'with invalid params' do
    let(:params) { { email: '' } }

    it 'returns failure' do
      expect(result).to be_failure
    end
  end
end
```

## Anti-Patterns to Avoid

1. **Testing implementation, not behavior**: Test what it does, not how
2. **Too many assertions**: Split into separate examples
3. **Brittle tests**: Don't test exact error messages or timestamps
4. **Slow tests**: Use `build` over `create`, mock external services
5. **Mystery guests**: Make test data explicit, not hidden in factories
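Anti-pattern 1 is the most common. A hypothetical plain-Ruby sketch of the difference: both checks below pass today, but only the behavioral one survives refactoring the internals.

```ruby
# Behavior vs implementation: assert the observable result, not how the
# object produces it internally.
class Greeter
  def initialize(name)
    @name = name
  end

  def greeting
    build_greeting # internal detail, free to change
  end

  private

  def build_greeting
    "Hello, #{@name}!"
  end
end

greeter = Greeter.new('Ada')

# GOOD: behavior -- what the caller actually observes
raise unless greeter.greeting == 'Hello, Ada!'

# BAD: implementation -- breaks if build_greeting is renamed or inlined
raise unless greeter.respond_to?(:build_greeting, true)

puts greeter.greeting # => Hello, Ada!
```

The second assertion couples the test to a private method name, so a behavior-preserving refactor would turn the suite red for no good reason.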

Overview

This skill guides you through the Test-Driven Development (TDD) Red-Green-Refactor cycle to implement features safely and iteratively. It helps you write a failing test first, implement the minimal code to satisfy it, then refactor with confidence while keeping the test suite green. The guidance is practical and checklist-driven for fast feedback loops.

How this skill works

The skill walks you through eight focused steps: analyze requirements, pick the right test type, write a failing spec (RED), confirm it fails, implement minimal code (GREEN), verify the spec passes, refactor incrementally, and re-run specs for final verification. It emphasizes clear test structure, encourages one behavior per example, recommends where to place specs, and enforces running specs after each change to catch regressions early.

When to use it

  • Implementing a new feature and you want tests defined first
  • Fixing a bug while ensuring future regressions are prevented
  • Adding behavior that needs clear acceptance criteria
  • Following a disciplined workflow for incremental design
  • Refactoring code but keeping behavior locked by tests

Best practices

  • Write one expectation per example and make descriptions read like a sentence
  • Prefer minimal setup; use build over create when possible to keep tests fast
  • Run the failing spec first to confirm the test is meaningful
  • Implement the smallest change to make the spec pass, then stop
  • Refactor one small improvement at a time and run specs after each change
  • Avoid testing implementation details; focus on observable behavior

Example use cases

  • Add a model method with unit specs: write a failing model spec, implement minimal method, refactor names and extract helpers
  • Implement an API endpoint: create a request spec that fails, implement controller/route, verify and clean up code
  • Introduce a service object: write failing service spec for expected result and side effects, implement call, then simplify internals
  • Hardening edge cases: write failing specs for error conditions, implement handling only for those cases, then run full suite

FAQ

What if the spec passes immediately?

If the spec passes, check whether the behavior already existed or the spec is incorrect. Update the spec to assert the intended behavior or confirm the existing implementation is acceptable.

How do I know when to stop refactoring?

Stop when code is clear, names are intention-revealing, duplication is reduced, and all related specs still pass. Avoid over-engineering; prefer small, safe changes.