
agent-coordination skill

/.claude/skills/agent-coordination

This skill coordinates multiple agents to orchestrate parallel, sequential, swarm, or iterative workflows across languages, boosting development speed.

npx playbooks add skill d-oit/do-novelist-ai --skill agent-coordination


Files (6)
SKILL.md
---
name: agent-coordination
description: Coordinate multiple agents for software development across any language. Use for parallel execution of independent tasks, sequential chains with dependencies, swarm analysis from multiple perspectives, or iterative refinement loops. Handles Python, JavaScript, Java, Go, Rust, C#, and other languages.
---

# Agent Coordination

Coordinate multiple agents efficiently for complex development tasks across any programming language.

## Quick Start

Choose your coordination strategy:

- **Parallel** - Independent tasks → See [PARALLEL.md](PARALLEL.md)
- **Sequential** - Dependent tasks → See [SEQUENTIAL.md](SEQUENTIAL.md)
- **Swarm** - Multi-perspective analysis → See [SWARM.md](SWARM.md)
- **Hybrid** - Multi-phase workflows → See [HYBRID.md](HYBRID.md)
- **Iterative** - Progressive refinement → See [ITERATIVE.md](ITERATIVE.md)

## Available Agents

| Agent | Best For |
|-------|----------|
| code-reviewer | Quality assessment, standards |
| test-runner | Execute tests, verify functionality |
| feature-implementer | Build new capabilities |
| refactorer | Improve existing code |
| debugger | Diagnose and fix issues |
| security-auditor | Find vulnerabilities |
| performance-optimizer | Speed and efficiency |
| loop-agent | Orchestrate iterations |

## Basic Workflow

1. **Choose strategy** based on task structure
2. **Select agents** matching required capabilities
3. **Execute** with quality gates between phases
4. **Validate** outputs before proceeding
5. **Synthesize** results
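
The five steps above can be sketched as a minimal orchestration loop. The `run_agent` helper below is a hypothetical stand-in for however your platform actually invokes an agent; it is not part of this skill's API:

```python
# Minimal sketch of the basic workflow. `run_agent` is a placeholder
# for a real agent invocation mechanism.
def run_agent(name, task):
    # A real implementation would dispatch to the named agent.
    return {"agent": name, "task": task, "ok": True}

def quality_gate(result):
    # Gate on whatever signal the agent reports; here a simple flag.
    return result["ok"]

def coordinate(strategy, agents, task):
    results = []
    for agent in agents:                      # 2. selected agents
        result = run_agent(agent, task)       # 3. execute
        if not quality_gate(result):          # 4. validate before proceeding
            raise RuntimeError(f"{agent} failed quality gate")
        results.append(result)
    return {"strategy": strategy, "results": results}  # 5. synthesize

summary = coordinate("sequential", ["feature-implementer", "test-runner"], "add login")
```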

## Language Support

This coordination skill works with:
- Python (Django, Flask, FastAPI)
- JavaScript/TypeScript (Node.js, React, Vue)
- Java (Spring, Jakarta EE)
- Go (Gin, Echo)
- Rust (Actix, Rocket)
- C# (.NET, ASP.NET Core)

## Common Patterns

**Analysis + Execution**:
```
1. Swarm analysis (parallel agents gather insights)
2. Sequential execution (apply findings)
3. Parallel validation (verify results)
```
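
One way to sketch this hybrid pattern in Python, using `ThreadPoolExecutor` for the parallel phases; the `analyze`, `apply_findings`, and `validate` functions are simulated placeholders for agent calls:

```python
# Hybrid sketch: parallel swarm analysis, sequential execution,
# then parallel validation. All agent work is simulated here.
from concurrent.futures import ThreadPoolExecutor

def analyze(agent):
    return f"{agent}: insight"

def apply_findings(findings):
    # Sequential phase: apply each finding in order.
    return [f"applied {f}" for f in findings]

def validate(change):
    return change.startswith("applied")

swarm = ["code-reviewer", "security-auditor", "performance-optimizer"]
with ThreadPoolExecutor() as pool:
    findings = list(pool.map(analyze, swarm))       # 1. parallel analysis
changes = apply_findings(findings)                  # 2. sequential execution
with ThreadPoolExecutor() as pool:
    checks = list(pool.map(validate, changes))      # 3. parallel validation
```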

**Test-Driven Workflow**:
```
1. test-runner: Run existing tests
2. feature-implementer: Add functionality
3. test-runner: Verify implementation
4. code-reviewer: Quality check
```
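
The test-driven chain is strictly sequential, so a failed step should stop the chain. A sketch, with `run_step` as a hypothetical agent runner:

```python
# Sequential chain sketch for the test-driven workflow.
# `run_step` is a placeholder for a real agent dispatcher.
def run_step(agent, action):
    return {"agent": agent, "action": action, "passed": True}

chain = [
    ("test-runner", "run existing tests"),
    ("feature-implementer", "add functionality"),
    ("test-runner", "verify implementation"),
    ("code-reviewer", "quality check"),
]

log = []
for agent, action in chain:
    step = run_step(agent, action)
    log.append(step)
    if not step["passed"]:
        break  # dependency broken; do not continue the chain
```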

**Performance Optimization**:
```
Loop with performance-optimizer until:
- Metrics meet targets
- No more optimizations found
- Max iterations reached
```
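
The loop's three stop criteria can be sketched as follows. The metric values and the `optimize_once` pass are illustrative only; real numbers would come from your benchmarks:

```python
# Iterative refinement sketch with the three stop criteria above.
# `optimize_once` simulates one performance-optimizer pass.
def optimize_once(metric):
    improved = metric * 0.8            # pretend each pass cuts latency 20%
    found = improved > 50              # below this floor, no wins remain
    return improved, found

TARGET = 100          # e.g. target latency in ms (illustrative)
MAX_ITERATIONS = 10

metric, iterations = 400.0, 0
while iterations < MAX_ITERATIONS:     # criterion 3: max iterations reached
    metric, found = optimize_once(metric)
    iterations += 1
    if metric <= TARGET:               # criterion 1: metrics meet targets
        break
    if not found:                      # criterion 2: no more optimizations
        break
```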

## Quality Gates

Between each phase, verify:
- Code compiles/parses correctly
- Tests pass with adequate coverage
- Security scans clean
- Performance acceptable
- No regressions introduced
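
The checklist above can be expressed as a single gate function. The report keys are illustrative; in practice the booleans would come from your compiler, test runner, and scanners:

```python
# Sketch of a quality-gate check run between phases.
def quality_gates_pass(report):
    gates = {
        "compiles": report.get("compiles", False),
        "tests_pass": report.get("tests_pass", False),
        "security_clean": report.get("security_clean", False),
        "performance_ok": report.get("performance_ok", False),
        "no_regressions": report.get("no_regressions", False),
    }
    failed = [name for name, ok in gates.items() if not ok]
    return len(failed) == 0, failed

ok, failed = quality_gates_pass({
    "compiles": True, "tests_pass": True, "security_clean": True,
    "performance_ok": True, "no_regressions": True,
})
```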

## Next Steps

Read the specific coordination pattern that matches your task structure. Each pattern includes detailed workflows, examples, and quality criteria.

Overview

This skill coordinates multiple agents to execute software development workflows across any programming language. It supports parallel, sequential, swarm, hybrid, and iterative coordination strategies to match task structure and complexity. Use it to split work, manage dependencies, and ensure quality gates between phases.

How this skill works

You pick a coordination strategy and assign specialized agents (code-reviewer, test-runner, feature-implementer, refactorer, debugger, security-auditor, performance-optimizer, loop-agent) to tasks. The system runs agents in parallel or sequence as configured, enforces quality gates between phases, and synthesizes results into a final deliverable. It supports languages and frameworks like Python, JavaScript/TypeScript, Java, Go, Rust, and C#.

When to use it

  • Parallelize independent tasks like linting, static analysis, or multi-module builds
  • Chain dependent steps such as design → implement → validate
  • Gather diverse perspectives with swarm analysis for architecture or security reviews
  • Run iterative refinement loops for performance tuning or progressive feature polish
  • Coordinate hybrid workflows mixing parallel analysis and sequential execution

Best practices

  • Choose the simplest coordination pattern that fits the task to avoid unnecessary complexity
  • Map each responsibility to the most suitable agent and limit agent scope
  • Define clear quality gates (compile, tests, security, performance) before advancing phases
  • Validate outputs between phases and synthesize findings to avoid duplicated effort
  • Set iteration limits and stop criteria for iterative or loop-based workflows

Example use cases

  • Implement a new feature: feature-implementer writes code, test-runner verifies, code-reviewer reviews, refactorer cleans up
  • Security audit swarm: multiple security-auditor agents probe from different angles, then a sequential patch-and-verify step
  • Performance tuning loop: performance-optimizer iterates until targets or max iterations reached, with test-runner and benchmarks validating progress
  • Multi-language microservice release: parallel agents build and test each service, then a sequential integration and deployment phase
  • Refactor large codebase: refactorer runs in parallel across modules, followed by centralized testing and review

FAQ

How do I pick a coordination strategy?

Match the strategy to task structure: parallel for independent work, sequential for dependencies, swarm for varied perspectives, iterative for refinement, or hybrid for multi-phase workflows.
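
That mapping can be written down as a simple lookup; the task-structure labels are illustrative, not a fixed schema:

```python
# Strategy selection sketch matching the answer above.
STRATEGY_FOR = {
    "independent": "parallel",
    "dependent": "sequential",
    "multi-perspective": "swarm",
    "refinement": "iterative",
    "multi-phase": "hybrid",
}

def pick_strategy(task_structure):
    # Sequential is a safe default for unrecognized structures.
    return STRATEGY_FOR.get(task_structure, "sequential")
```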

What quality gates should I enforce?

At minimum ensure code compiles/parses, tests pass with required coverage, security scans are clean, and performance meets targets before proceeding.