
team-review skill

/dotclaude/skills/team-review

This skill analyzes the target to auto-select reviewers, creates an agent team, and consolidates findings into a unified report.

npx playbooks add skill shotaiuchi/dotclaude --skill team-review

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
6.0 KB
---
name: team-review
description: Automatically compose and launch the code-review team best suited to the target using Agent Teams
argument-hint: "[--pr N | --issue N | --commit REF | --diff | --staged | --branch NAME | path | text]"
user-invocable: true
disable-model-invocation: true
---

# Review Team

Create an Agent Team with automatically selected reviewers based on the target type.

## Instructions

1. **Analyze the target** (PR, file, or directory) to determine the project type
2. **Select appropriate reviewers** based on the selection matrix below
3. **Create the agent team** with only the selected reviewers
4. **Consolidate the findings**: have reviewers share findings with each other and produce a unified report

## Step 0: Scope Detection

Parse `$ARGUMENTS` to determine the analysis target.
See `references/agent-team/scope-detection.md` for full detection rules.

| Flag | Scope | Action |
|------|-------|--------|
| `--pr <N>` | PR | `gh pr diff <N>` + `gh pr view <N> --json title,body,files` |
| `--issue <N>` | Issue | `gh issue view <N> --json title,body,comments` |
| `--commit <ref>` | Commit | `git show <ref>` or `git diff <range>` |
| `--diff` | Unstaged changes | `git diff` |
| `--staged` | Staged changes | `git diff --staged` |
| `--branch <name>` | Branch diff | `git diff main...<name>` |
| Path pattern | File/Directory | `Glob` + `Read` |
| Free text | Description | Use as context for analysis |
| (empty or ambiguous) | Unknown | Ask user to specify target |
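
For example, assuming the skill is invoked as a `/team-review` slash command (the argument values below are illustrative):

```
/team-review --pr 123          # review pull request 123
/team-review --staged          # review currently staged changes
/team-review src/api/          # review a directory
/team-review "auth refactor"   # free-text context for analysis
```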

## Step 1: Target Analysis

Before spawning any teammates, analyze the target to determine its type:

| Signal | Type |
|--------|------|
| `.kt`, `.java`, `build.gradle.kts`, `AndroidManifest.xml`, `android/` | Mobile (Android) |
| `.swift`, `.xcodeproj`, `.xib`, `.storyboard`, `ios/`, `Podfile` | Mobile (iOS) |
| `shared/`, `commonMain/`, `expect/actual`, KMP module structure | Mobile (KMP) |
| `.ts`, `.tsx`, `.jsx`, `.vue`, `.svelte`, `next.config`, `vite.config` | Web Frontend |
| `template.yaml`, `samconfig.toml`, Lambda handlers, API Gateway | Server (AWS SAM) |
| `Dockerfile`, `docker-compose`, REST/GraphQL handlers, `src/api/` | Server (General) |
| `setup.py`, `Cargo.toml`, `go.mod`, `package.json` (no UI) | CLI / Library |
| Mixed signals | Analyze dominant patterns and apply multiple types |
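
The classification could be sketched in Python roughly as follows; the signal lists are abridged from the table above, and the names (`SIGNALS`, `detect_types`) are illustrative, not part of the skill:

```python
# Hypothetical sketch of the signal-to-type mapping; lists are abridged
# from the table above and names are illustrative.
SIGNALS = {
    "Mobile (Android)": [".kt", ".java", "build.gradle.kts", "AndroidManifest.xml", "android/"],
    "Mobile (iOS)": [".swift", ".xcodeproj", ".xib", ".storyboard", "ios/", "Podfile"],
    "Web Frontend": [".tsx", ".jsx", ".vue", ".svelte", "next.config", "vite.config"],
    "Server (AWS SAM)": ["template.yaml", "samconfig.toml"],
    "Server (General)": ["Dockerfile", "docker-compose", "src/api/"],
    "CLI / Library": ["setup.py", "Cargo.toml", "go.mod"],
}

def detect_types(paths: list[str]) -> set[str]:
    """Return every type whose signals appear; mixed signals yield multiple types."""
    hits = {
        target_type
        for target_type, signals in SIGNALS.items()
        if any(signal in path for path in paths for signal in signals)
    }
    return hits or {"Unknown"}

print(detect_types(["Dockerfile", "src/api/users.py"]))  # {'Server (General)'}
```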

## Step 2: Reviewer Selection Matrix

| Reviewer | Mobile | Server/API | Web Frontend | CLI/Library |
|:---------|:------:|:----------:|:------------:|:-----------:|
| Security | Always | Always | Always | Always |
| Performance | Always | Always | Always | Always |
| Architecture | Always | Always | Always | Always |
| Test Coverage | Always | Always | Always | Always |
| Error Handling | Always | Always | Always | Always |
| Concurrency | Always | Always | If async-heavy | If multi-threaded |
| API Design | If consuming APIs | Always | If building APIs | If public API |
| Accessibility | Always | Skip | Always | Skip |
| Dependency | Always | Always | Always | Always |
| Observability | If analytics/crash reporting | Always | If logging present | If logging present |

### Selection Rules

- **Always**: Spawn this reviewer unconditionally
- **Skip**: Do not spawn this reviewer
- **Conditional**: Spawn only if the condition is met based on code analysis

When uncertain, **include the reviewer** (prefer thoroughness over efficiency).
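
One way to encode the matrix and rules, as a hedged Python sketch (flag names like `async_heavy` are invented for illustration):

```python
# Hypothetical encoding of the selection matrix; conditional entries name
# a flag that target analysis must have set during Step 1.
ALWAYS, SKIP = "always", "skip"

MATRIX = {
    "Security":      {"Mobile": ALWAYS, "Server": ALWAYS, "Web": ALWAYS, "CLI": ALWAYS},
    "Concurrency":   {"Mobile": ALWAYS, "Server": ALWAYS, "Web": "async_heavy", "CLI": "multi_threaded"},
    "Accessibility": {"Mobile": ALWAYS, "Server": SKIP,   "Web": ALWAYS, "CLI": SKIP},
    # ... remaining reviewers follow the table above
}

def select_reviewers(target_type: str, flags: set[str]) -> list[str]:
    """Conditional reviewers spawn only when target analysis set the matching flag."""
    selected = []
    for reviewer, rules in MATRIX.items():
        rule = rules[target_type]
        if rule == ALWAYS or (rule != SKIP and rule in flags):
            selected.append(reviewer)
    return selected

print(select_reviewers("Web", {"async_heavy"}))
# ['Security', 'Concurrency', 'Accessibility']
```

In this encoding, the "when uncertain, include" rule lives upstream: target analysis sets a conditional flag whenever the evidence is ambiguous, so the conditional reviewer is spawned.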

## Step 3: Team Creation

Spawn only the selected reviewers using the **Task tool** (`subagent_type: "general-purpose"`).

**Execution Rules:**
- Send ALL Task tool calls in a **single message** for parallel execution
- Each subagent runs in its own context and returns findings to the lead (main context)
- Provide each subagent with the full target context (diff, file contents, etc.) in the prompt
- The lead (main context) is responsible for synthesis — do NOT spawn a subagent for synthesis
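
A sketch of what one reviewer's prompt might contain; the structure below is an assumption, and the lead composes the actual wording from the target context:

```
You are the Security Reviewer on a code-review team.

Target (unified diff):
<full diff provided by the lead>

Review for: vulnerabilities, authentication/authorization, input validation,
injection attacks, CSRF/XSS, secrets exposure, and OWASP Top 10 issues.

Return your findings as a list of: severity, location (file:line),
issue description, and recommended fix.
```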

1. **Security Reviewer**: Review for vulnerabilities, authentication, authorization, input validation, injection attacks, CSRF, XSS, secrets exposure, and OWASP Top 10 issues.

2. **Performance Reviewer**: Review for N+1 queries, unnecessary re-renders, memory leaks, inefficient algorithms, database access patterns, caching, and resource optimization.

3. **Architecture Reviewer**: Review for design patterns, SOLID principles, layer separation, dependency direction, modularity, coupling, cohesion, and maintainability.

4. **Test Coverage Reviewer**: Review for missing unit tests, integration tests, edge cases, error handling paths, test quality, and test maintainability.

5. **Error Handling Reviewer**: Review for exception handling, error propagation, retry logic, fallback strategies, graceful degradation, and failure recovery paths.

6. **Concurrency Reviewer**: Review for thread safety, race conditions, deadlocks, shared mutable state, coroutine/async patterns, and synchronization issues.

7. **API Design Reviewer**: Review for REST/GraphQL API conventions, request/response design, versioning, backward compatibility, error responses, and documentation.

8. **Accessibility Reviewer**: Review for screen reader support, keyboard navigation, color contrast, WCAG compliance, semantic markup, ARIA labels, and inclusive design.

9. **Dependency Reviewer**: Review for known CVEs, license compliance, dependency size, transitive dependencies, supply chain security, and version management.

10. **Observability Reviewer**: Review for logging quality, monitoring coverage, metrics, distributed tracing, alerting, and production debugging capability.

## Workflow

1. The lead analyzes the target and announces the selected reviewers with reasoning
2. Each selected reviewer reviews the target from their specialized perspective
3. Reviewers share critical findings with each other for cross-perspective validation
4. The lead synthesizes all findings into a consolidated report with priority ordering:
   - Critical / Blocker items first
   - High severity items
   - Medium severity items
   - Low severity / enhancements

## Output

The lead produces a final consolidated review report including:
- Target type detected and reviewers selected (with reasoning)
- Findings grouped by severity
- Actionable recommendations
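
A hypothetical skeleton of such a report (the target, reviewer set, and findings below are invented for illustration):

```
## Consolidated Review Report

Target: staged changes (12 files), detected as Server (General)
Reviewers: Security, Performance, Architecture, Test Coverage, Error Handling,
Concurrency, API Design, Dependency, Observability
(Accessibility skipped: no UI code in the target)

### Critical / Blocker
- [Security] SQL query built by string concatenation in src/api/users.py:42;
  use parameterized queries

### High
- [Concurrency] Shared in-process cache mutated without synchronization;
  guard with a lock or use an atomic structure

### Medium
- [Test Coverage] Retry path in the payment client has no tests

### Low / Enhancements
- [Observability] Add request IDs to API error logs
```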

Overview

This skill automatically composes and launches a focused code-review team tailored to a specified target (PR, commit, diff, file, or directory). It analyzes the target to detect project type, applies a reviewer-selection matrix, spawns the selected reviewers in parallel, and synthesizes a consolidated, prioritized report of findings and remediation steps.

How this skill works

The skill parses the provided target arguments to detect scope (PR, issue, commit, branch, staged/unstaged changes, or file paths) and inspects file extensions and project signals to classify the target type (Mobile, Web Frontend, Server/API, CLI/Library, or mixed). Based on the selection matrix and conditional rules, it chooses which specialized reviewers to spawn (security, performance, architecture, and so on). It then launches them in parallel with full target context, collects their findings, and produces a prioritized consolidated report.

When to use it

  • Get a rapid, focused, multi-perspective code review for a PR or commit.
  • Assess risk and quality before merging significant changes (new features, infra, or dependency updates).
  • Audit a directory or files that represent a specific subsystem (API, frontend, mobile).
  • Generate a structured review report for compliance, release readiness, or postmortem follow-up.

Best practices

  • Provide a precise target (PR number, commit ref, branch, or explicit file paths) to avoid ambiguity.
  • Include diffs or contextual notes for large or mixed-repo changes so conditional reviewers trigger correctly.
  • Prefer thoroughness: when in doubt, include conditional reviewers rather than skipping them.
  • Supply test artifacts, logs, or runtime config when available to improve observability and performance reviews.
  • Limit scope per run for large codebases: run separate teams per subsystem to keep findings actionable.

Example use cases

  • Pre-merge review of a PR that adds a new REST API and database migrations.
  • Security and dependency audit for a dependency upgrade across multiple modules.
  • Accessibility and frontend performance review for a UI refactor touching many components.
  • Mobile release checklist review (Android/iOS) including crash reporting and concurrency analysis.
  • Library/CLI release review focusing on API design, versioning, and test coverage.

FAQ

What targets can I pass to the skill?

You can pass a PR number, issue, commit ref or range, branch name, staged/unstaged diffs, file or directory paths, or free-text context. If ambiguous, the skill will ask you to clarify.

How are reviewers selected?

Selection uses a deterministic matrix: some reviewers always run, some are skipped for certain target types, and some are conditional on detected patterns (e.g., async code, public API surfaces). When uncertain, the skill errs on the side of including reviewers.