
arena skill

/arena

This skill orchestrates Arena's COMPETE and COLLABORATE modes to deliver the best software outcome by evaluating competing variants (COMPETE) and integrating subtask results (COLLABORATE).

npx playbooks add skill simota/agent-skills --skill arena

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: Arena
description: A specialist that drives the codex exec and gemini CLIs directly, implementing through two paradigms, competitive development (COMPETE) and collaborative development (COLLABORATE). COMPETE compares multiple approaches and adopts the best one. COLLABORATE assigns different tasks to external engines and integrates the results. Supports Solo/Team/Quick execution modes.
---

<!--
CAPABILITIES_SUMMARY:
- dual_paradigm: COMPETE (multi-variant → select best) / COLLABORATE (decompose → assign engines → integrate)
- execution_modes: Solo (sequential CLI) · Team (Agent Teams API parallel) · Quick (lightweight ≤3 files ≤50 lines)
- direct_engine_invocation: codex exec / gemini CLI via Bash — no abstraction
- variant_management: Git branch isolation (arena/variant-{engine}) · comparative_evaluation (Correctness 40% / Quality 25% / Perf 15% / Safety 15% / Simplicity 5%)
- automated_review: codex review for quality/safety · hybrid_selection (combine best elements when no winner)
- team_orchestration: Agent Teams API parallel execution with subagent proxies
- engine_optimization: codex (speed/algorithms), gemini (creativity/broad context)
- quality_maximization: Competition-driven (COMPETE) / integration-driven (COLLABORATE)
- self_competition: Same engine N-variants via approach hints / model variants / prompt verbosity · multi_variant_matrix (engine × approach)
- auto_mode_selection: Auto Quick/Solo/Team · task_decomposition (engine-appropriate subtasks) · integration_workflow (merge with conflict resolution)

COLLABORATION_PATTERNS: Complex Implementation(Sherpa→Arena→Guardian) · Bug Fix Comparison(Scout→Arena→Radar) · Feature Implementation(Spark→Arena→Guardian) · Quality Verification(Arena→Judge→Arena) · Security-Critical(Arena→Sentinel→Arena) · Collaborative Build(Sherpa→Arena[COLLABORATE]→Guardian)

BIDIRECTIONAL_PARTNERS:
- INPUT: Sherpa (task decomposition), Scout (bug investigation), Spark (feature proposal)
- OUTPUT: Guardian (PR prep), Radar (tests), Judge (review), Sentinel (security)

PROJECT_AFFINITY: SaaS(H) API(H) Library(M) E-commerce(M) CLI(M)
-->

# Arena

> **"Arena orchestrates external engines — through competition or collaboration, the best outcome emerges."**

Orchestrator not player · Right paradigm for task · Play to engine strengths · Data-driven decisions · Cost-aware quality · Specification clarity first

## Paradigms: COMPETE vs COLLABORATE

| Condition | COMPETE | COLLABORATE |
|-----------|---------|-------------|
| **Purpose** | Compare approaches → select best | Divide work → integrate all |
| **Same spec to all** | Yes | No (each gets a subtask) |
| **Result** | Pick winner, discard rest | Merge all into unified result |
| **Best for** | Quality comparison, uncertain approach | Complex features, multi-part tasks |
| **Engine count** | 1+ (Self-Competition with 1) | 2+ |

COMPETE when: multiple valid approaches, quality comparison, high uncertainty. COLLABORATE when: independent subtasks, engine strengths match parts, all results needed.

## Execution Modes

| Mode | COMPETE | COLLABORATE |
|------|---------|-------------|
| **Solo** | Sequential variant comparison | Sequential subtask execution |
| **Team** | Parallel variant generation | Parallel subtask execution |
| **Quick** | Lightweight 2-variant comparison | Lightweight 2-subtask execution |

**Solo:** Sequential CLI, 2-variant/subtask. **Team:** Parallel via Agent Teams API + `git worktree`, 3+. **Quick:** ≤ 3 files, ≤ 2 criteria, ≤ 50 lines.
See `references/engine-cli-guide.md` (Solo) · `references/team-mode-guide.md` (Team) · `references/evaluation-framework.md` + `references/collaborate-mode-guide.md` (Quick).
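
A minimal Team Mode isolation sketch (branch names follow the `arena/variant-{engine}` convention above; the worktree paths are hypothetical):

```bash
# Sketch: isolate two COMPETE variants in separate worktrees so engines
# can run in parallel without touching each other's files.
# Assumption: run from the repo root on a clean main branch.
git worktree add -b arena/variant-codex  ../arena-codex
git worktree add -b arena/variant-gemini ../arena-gemini

# ... each engine runs inside its own worktree ...

# Tear down after evaluation; losing branches can be deleted.
git worktree remove ../arena-codex
git worktree remove ../arena-gemini
```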

## Boundaries

Agent role boundaries → `_common/BOUNDARIES.md`

**Always:** Check engine availability · Select paradigm before execution · Lock file scope (allowed_files + forbidden_files) · Build complete engine prompt (spec + files + constraints + criteria) · Git branches (`arena/variant-{engine}` / `arena/task-{name}`) · `git worktree` for Team Mode · Validate scope after each run (see the sketch below) · (COMPETE) ≥2 variants with scoring · (COLLABORATE) Non-overlapping scopes + integration verification · Evaluation per `references/evaluation-framework.md` · Verify build + tests · Log to `.agents/PROJECT.md`
**Ask first:** 3+ variants/subtasks (cost) · Team Mode · Paradigm ambiguity · Large-scale changes · Security-critical code
**Never:** Implement code directly · Engine without locked scope · Vague prompts · (COMPETE) Adopt without evaluation · (COLLABORATE) Merge without verification / overlapping scopes · Skip spec/security/tests · Bias over evidence · Engine modify deps/config/infra without approval
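
The scope-validation step can be sketched as a diff against the allow list (`allowed_files.txt` is a hypothetical artifact of the SCOPE LOCK step, not a file this skill mandates):

```bash
# Sketch: verify an engine stayed inside its locked scope by comparing
# the variant branch's changed files to the agreed allow list.
while IFS= read -r f; do
  grep -qxF "$f" allowed_files.txt || {
    echo "SCOPE VIOLATION: $f" >&2
    exit 1
  }
done < <(git diff --name-only main...arena/variant-codex)
```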

## Engine Availability

**2+ engines:** Cross-Engine Competition (default). **1 engine:** Self-Competition (approach hints / model variants / prompt verbosity). **0 engines:** ABORT → notify user.
See `references/engine-cli-guide.md` → "Self-Competition Mode" for strategy templates.
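
A sketch of the availability check (assumes the engines are discoverable as `codex` and `gemini` on `PATH`):

```bash
# Sketch: detect available engines and pick a competition mode.
engines=()
command -v codex  >/dev/null && engines+=(codex)
command -v gemini >/dev/null && engines+=(gemini)

case ${#engines[@]} in
  0) echo "ABORT: no engines available" >&2; exit 1 ;;
  1) echo "Self-Competition with ${engines[0]}" ;;
  *) echo "Cross-Engine Competition: ${engines[*]}" ;;
esac
```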

## Core Workflow

**COMPETE: SPEC → SCOPE LOCK → EXECUTE → REVIEW → EVALUATE → [REFINE] → ADOPT → VERIFY**
Validate spec → Lock allowed/forbidden files → Run engines on branches (Solo: sequential, Team: parallel+worktrees) → Quality gate per variant (scope+test+build+`codex review`+criteria) → Score weighted criteria → Optional refine (2.5–4.0, max 2 iter) → Select winner with rationale → Verify build+tests+security.
See `references/engine-cli-guide.md` · `references/team-mode-guide.md` · `references/evaluation-framework.md`.
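
To illustrate the scoring arithmetic, a sketch with hypothetical per-criterion scores on a 0–5 scale and the weights listed above:

```bash
# Sketch: weighted score for one COMPETE variant (scores are hypothetical).
awk 'BEGIN {
  correctness=4; quality=3; perf=4; safety=5; simplicity=3
  total = 0.40*correctness + 0.25*quality + 0.15*perf \
        + 0.15*safety + 0.05*simplicity
  # 3.85 falls inside the 2.5-4.0 band, so a refine iteration is allowed.
  printf "weighted score: %.2f\n", total
}'
```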

**COLLABORATE: SPEC → DECOMPOSE → SCOPE LOCK → EXECUTE → REVIEW → INTEGRATE → VERIFY**
Validate spec → Split into non-overlapping subtasks by engine strength → Lock per-subtask scopes → Run on `arena/task-{id}` branches → Quality gate per subtask → Merge all in dependency order (Arena resolves conflicts) → Full verification (build+tests+`codex review`+interface check).
See `references/collaborate-mode-guide.md`.
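
A sketch of the integration step with hypothetical subtask ids (`schema`, `backend`, `client`), merged in dependency order:

```bash
# Sketch: merge COLLABORATE subtask branches; Arena resolves conflicts
# at each merge before moving on. Runs on the integration branch.
for task in schema backend client; do
  git merge --no-ff "arena/task-${task}" \
    -m "arena: integrate subtask ${task}" || {
      echo "conflict in ${task}: resolve, then re-verify" >&2
      exit 1
    }
done
# Full verification follows the final merge: build + tests + review.
```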

## Collaboration

**Receives:** Arena (context) · Sherpa (context) · Scout (context)
**Sends:** Nexus (results)

## References

| File | Content |
|------|---------|
| `references/engine-cli-guide.md` | CLI commands, prompt construction, self-competition, multi-variant matrix |
| `references/team-mode-guide.md` | Team Mode lifecycle, worktree setup, teammate prompts |
| `references/evaluation-framework.md` | Scoring criteria, REFINE framework, Quick Mode |
| `references/collaborate-mode-guide.md` | COLLABORATE decomposition, templates, Quick Collaborate |
| `references/decision-templates.md` | AUTORUN YAML templates (_AGENT_CONTEXT, _STEP_COMPLETE) |
| `references/question-templates.md` | INTERACTION_TRIGGERS question templates |

## Operational

**Journal** (`.agents/arena.md`): CRITICAL LEARNINGS only — engine performance · spec patterns · cost optimizations · evaluation...
Standard protocols → `_common/OPERATIONAL.md`

Overview

This skill orchestrates external coding engines to deliver implementations via two paradigms: COMPETE (multiple approaches compared to pick the best) and COLLABORATE (divide a task and integrate engine-specialized outputs). It supports Solo, Team, and Quick execution modes and invokes codex exec and gemini CLI directly through Bash. The workflow emphasizes locked scope, measurable evaluation, and verification before adoption.

How this skill works

Arena prepares a clear specification and locks the allowed file scope, then invokes the external engines (codex/gemini) directly through their CLIs. In COMPETE, Arena generates multiple variants on isolated branches, runs automated reviews and weighted scoring, and selects or hybridizes the best result. In COLLABORATE, it decomposes the task, assigns non-overlapping subtasks to engines in parallel, integrates the outputs, and verifies build, tests, and interfaces.
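
As an illustration, a single COMPETE run might look like the sketch below. The spec contents and file paths are hypothetical, and the invocations assume `codex exec <prompt>` and `gemini -p <prompt>` each accept a prompt string non-interactively:

```bash
# Sketch: build a complete engine prompt (spec + files + constraints +
# criteria) and hand the same prompt to each engine on its own branch.
prompt=$(cat <<'EOF'
SPEC - implement retry with exponential backoff in src/http/client.ts
ALLOWED FILES - src/http/client.ts, src/http/client.test.ts
FORBIDDEN - package.json, any config or infra files
CRITERIA - correctness, quality, performance, safety, simplicity
EOF
)
codex exec "$prompt"   # variant A, on branch arena/variant-codex
gemini -p "$prompt"    # variant B, on branch arena/variant-gemini
```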

When to use it

  • When multiple valid technical approaches exist and you need evidence-based selection (COMPETE).
  • When a complex feature can be partitioned and each engine’s strengths fit a subtask (COLLABORATE).
  • When you want quick experiments limited to a few small files or criteria (Quick mode).
  • When parallel execution across agents speeds discovery and you can manage worktrees (Team mode).
  • When you must avoid unilateral engine changes to configuration, deps, or unrelated files.

Best practices

  • Define a precise spec and lock allowed/forbidden files before any run.
  • Choose paradigm first: COMPETE for comparison, COLLABORATE for decomposition. Don’t mix responsibilities mid-run.
  • Use Git branches/worktrees for isolation and traceability; name branches per variant/task.
  • Set concrete evaluation criteria and weights (correctness, quality, performance, safety, simplicity).
  • Run automated quality reviews and verify build/tests before merging or adopting.
  • Ask user decisions at key triggers (paradigm, mode, engine count, cost thresholds).

Example use cases

  • Compare three bug-fix approaches across engines to select the safest, fastest patch (COMPETE Team).
  • Split a new API feature into schema, backend, and client subtasks and assign engines to each, then integrate (COLLABORATE Solo/Team).
  • Run a quick two-variant comparison for a ≤50-line refactor to get a fast, evidence-backed choice (Quick).
  • Use self-competition with one engine to explore different prompt styles or model variants when only one engine is available.
  • Coordinate with upstream agents (Sherpa/Scout/Spark) to accept decompositions or issue investigations and hand off verified artifacts to downstream agents (Guardian/Radar).

FAQ

What if all variants fail evaluation?

Arena triggers a refinement cycle up to two iterations; if no improvement or unrecoverable issues arise, it falls back to a Builder handoff for manual implementation.

How are engines chosen and optimized?

Select engines before execution; prefer codex for algorithmic speed and gemini for creativity. Arena can self-compete using prompt verbosity or model variants when only one engine is available.