
subagent-driven-development skill


This skill accelerates development by dispatching fresh subagents for each task with inter-task reviews to ensure fast, high-quality iteration.

This is most likely a fork of the `subagent-driven-development_obra` skill from jackspace.
```
npx playbooks add skill nickcrew/claude-cortex --skill subagent-driven-development
```

---
name: subagent-driven-development
description: Use when executing implementation plans with independent tasks in the current session - dispatches fresh subagent for each task with code review between tasks, enabling fast iteration with quality gates
---

# Subagent-Driven Development

Execute a plan by dispatching a fresh subagent per task, with code review after each.

**Core principle:** Fresh subagent per task + review between tasks = high quality, fast iteration

## Overview

**vs. Executing Plans (parallel session):**
- Same session (no context switch)
- Fresh subagent per task (no context pollution)
- Code review after each task (catch issues early)
- Faster iteration (no human in the loop between tasks)

**When to use:**
- Staying in this session
- Tasks are mostly independent
- Want continuous progress with quality gates

**When NOT to use:**
- Need to review plan first (use executing-plans)
- Tasks are tightly coupled (manual execution better)
- Plan needs revision (brainstorm first)

## The Process

### 1. Load Plan

Read plan file, create TodoWrite with all tasks.
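This step can be sketched as a small parser that turns plan headings into todo entries. A minimal sketch: the `Task N:` heading format is an assumption about how the plan file is written, so adapt the pattern to your plan's actual structure.

```python
import re

def load_plan_tasks(plan_text: str) -> list[dict]:
    """Extract 'Task N: ...' markdown headings into pending todo entries.

    Assumes the plan marks tasks with headings like '## Task 1: Hook
    installation script' (an assumption; adjust the pattern as needed).
    """
    tasks = []
    for match in re.finditer(r"^#+\s*Task\s+(\d+):\s*(.+)$", plan_text, re.MULTILINE):
        tasks.append({
            "id": int(match.group(1)),
            "name": match.group(2).strip(),
            "status": "pending",
        })
    return tasks
```

Each entry then maps onto one TodoWrite item, marked `completed` as tasks finish.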

### 2. Execute Task with Subagent

For each task:

**Dispatch fresh subagent:**
```
Task tool (general-purpose):
  description: "Implement Task N: [task name]"
  prompt: |
    You are implementing Task N from [plan-file].

    Read that task carefully. Your job is to:
    1. Implement exactly what the task specifies
    2. Write tests (following TDD if task says to)
    3. Verify implementation works
    4. Commit your work
    5. Report back

    Work from: [directory]

    Report: What you implemented, what you tested, test results, files changed, any issues
```

**Subagent reports back** with summary of work.

### 3. Review Subagent's Work

**Dispatch code-reviewer subagent:**
```
Task tool (superpowers:code-reviewer):
  Use template at requesting-code-review/code-reviewer.md

  WHAT_WAS_IMPLEMENTED: [from subagent's report]
  PLAN_OR_REQUIREMENTS: Task N from [plan-file]
  BASE_SHA: [commit before task]
  HEAD_SHA: [current commit]
  DESCRIPTION: [task summary]
```

**Code reviewer returns:** Strengths, Issues (Critical/Important/Minor), Assessment
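BASE_SHA and HEAD_SHA can be captured with `git rev-parse HEAD` before and after the implementation subagent commits; the pair defines the diff the reviewer sees. A minimal sketch using Python's `subprocess` (not part of the Task tool itself):

```python
import subprocess

def current_sha(repo_dir: str) -> str:
    """Return the SHA of the current HEAD commit in repo_dir."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout.strip()

# Record BASE_SHA before dispatching the implementation subagent,
# and HEAD_SHA after it commits; pass both to the code-reviewer.
```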

### 4. Apply Review Feedback

**If issues found:**
- Fix Critical issues immediately
- Fix Important issues before next task
- Note Minor issues

**Dispatch follow-up subagent if needed:**
```
"Fix issues from code review: [list issues]"
```
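The triage rules above can be sketched as a small helper. This is hypothetical: the issue dicts and the exact `Critical`/`Important`/`Minor` labels are assumptions about the code-reviewer's report format.

```python
def triage(issues: list[dict]) -> dict:
    """Sort review issues into action buckets by severity.

    Critical blocks immediately; Important must be fixed before the
    next task; everything else is noted for later.
    """
    buckets = {"fix_now": [], "fix_before_next_task": [], "note": []}
    for issue in issues:
        severity = issue.get("severity", "Minor")
        if severity == "Critical":
            buckets["fix_now"].append(issue)
        elif severity == "Important":
            buckets["fix_before_next_task"].append(issue)
        else:
            buckets["note"].append(issue)
    return buckets
```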

### 5. Mark Complete, Next Task

- Mark task as completed in TodoWrite
- Move to next task
- Repeat steps 2-5
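The per-task loop in steps 2-5 can be outlined as follows. This is a hedged sketch: `implement`, `review`, and `fix` stand in for Task-tool dispatches and are hypothetical callables, and for simplicity only Critical issues block here, whereas the skill also requires Important fixes before the next task.

```python
def run_plan(tasks, implement, review, fix):
    """Drive steps 2-5 for each task in order.

    `implement`, `review`, and `fix` are placeholders for Task-tool
    dispatches; `review` is assumed to return a list of issue dicts.
    """
    for task in tasks:
        report = implement(task)           # step 2: fresh subagent
        issues = review(task, report)      # step 3: code review
        while any(i["severity"] == "Critical" for i in issues):
            report = fix(task, issues)     # step 4: follow-up subagent
            issues = review(task, report)  # re-review the fix
        task["status"] = "completed"       # step 5: mark done, next task
    return tasks
```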

### 6. Final Review

After all tasks complete, dispatch final code-reviewer:
- Reviews entire implementation
- Checks all plan requirements met
- Validates overall architecture

### 7. Complete Development

After final review passes:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, execute choice

## Example Workflow

```
You: I'm using Subagent-Driven Development to execute this plan.

[Load plan, create TodoWrite]

Task 1: Hook installation script

[Dispatch implementation subagent]
Subagent: Implemented install-hook with tests, 5/5 passing

[Get git SHAs, dispatch code-reviewer]
Reviewer: Strengths: Good test coverage. Issues: None. Ready.

[Mark Task 1 complete]

Task 2: Recovery modes

[Dispatch implementation subagent]
Subagent: Added verify/repair, 8/8 tests passing

[Dispatch code-reviewer]
Reviewer: Strengths: Solid. Issues (Important): Missing progress reporting

[Dispatch fix subagent]
Fix subagent: Added progress every 100 conversations

[Verify fix, mark Task 2 complete]

...

[After all tasks]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, ready to merge

Done!
```

## Advantages

**vs. Manual execution:**
- Subagents follow TDD naturally
- Fresh context per task (no confusion)
- Parallel-safe (subagents don't interfere)

**vs. Executing Plans:**
- Same session (no handoff)
- Continuous progress (no waiting)
- Review checkpoints automatic

**Cost:**
- More subagent invocations
- But catches issues early (cheaper than debugging later)

## Red Flags

**Never:**
- Skip code review between tasks
- Proceed with unfixed Critical issues
- Dispatch multiple implementation subagents in parallel (conflicts)
- Implement without reading plan task

**If subagent fails task:**
- Dispatch fix subagent with specific instructions
- Don't try to fix manually (context pollution)

## Integration

**Required workflow skills:**
- **writing-plans** - REQUIRED: Creates the plan that this skill executes
- **requesting-code-review** - REQUIRED: Review after each task (see Step 3)
- **finishing-a-development-branch** - REQUIRED: Complete development after all tasks (see Step 7)

**Subagents must use:**
- **test-driven-development** - Subagents follow TDD for each task

**Alternative workflow:**
- **executing-plans** - Use for parallel session instead of same-session execution

See code-reviewer template: requesting-code-review/code-reviewer.md

## Overview

This skill runs implementation plans by dispatching a fresh subagent for each independent task, with an automated code-review gate after every task. It enables fast, same-session iteration while preventing context pollution and catching issues early through continuous reviews.

## How this skill works

Load a plan into a task list, then, for each task, spawn a fresh implementation subagent that implements the task, writes and runs tests, commits changes, and reports results. After each implementation, spawn a code-reviewer subagent to analyze strengths, list Critical/Important/Minor issues, and recommend fixes. Apply fixes immediately for Critical issues, schedule Important fixes before the next task, and iterate until the review passes. Finish with a final full-project review and use a finishing step to validate and complete the development branch.

## When to use it

- Executing an implementation plan while staying in the same session
- Tasks are mostly independent and can be implemented separately
- You want continuous progress without manual handoffs between tasks
- You require quality gates between tasks to catch defects early
- You prefer TDD-driven subagents for each change

## Best practices

- Keep tasks small and independent to avoid merge conflicts
- Enforce TDD in subagents: tests first, then implementation
- Always run the code-reviewer after every task, and treat Critical issues as blockers
- Use a single implementation subagent per task (no parallel implementation of interdependent tasks)
- Record BASE_SHA and HEAD_SHA for reliable code-review diffs and traceability

## Example use cases

- Incrementally implement features in a CLI app where each command is a separate task
- Add a series of unit-tested integrations to a backend service, one integration per task
- Refactor modules one at a time, with tests and review after each refactor
- Execute a migration plan by applying small, reviewed schema and code changes sequentially
- Complete a bug-fix backlog where each bug is addressed by an isolated subagent

## FAQ

**What if a subagent fails to implement a task correctly?**

Dispatch a follow-up fix subagent with the code-review issues as instructions. Treat Critical issues as immediate blockers and iterate until tests and reviews pass.

**Can I run multiple implementation subagents in parallel?**

No. Avoid parallel implementation for tasks that may touch the same files, to prevent conflicts and context pollution.