
This skill collects opinions from multiple AI agents and synthesizes a single answer for informed, well-rounded guidance.

npx playbooks add skill team-attention/plugins-for-claude-natives --skill agent-council

Review the files below or copy the command above to add this skill to your agents.

Files (11)
SKILL.md (1013 B)
---
name: agent-council
description: Collect and synthesize opinions from multiple AI agents. Use when users say "summon the council", "ask other AIs", or want multiple AI perspectives on a question.
---

# Agent Council

Collect multiple AI opinions and synthesize one answer.

## Usage

Run a job and collect results:

```bash
JOB_DIR=$(./skills/agent-council/scripts/council.sh start "your question here")
./skills/agent-council/scripts/council.sh wait "$JOB_DIR"
./skills/agent-council/scripts/council.sh results "$JOB_DIR"
./skills/agent-council/scripts/council.sh clean "$JOB_DIR"
```

One-shot:

```bash
./skills/agent-council/scripts/council.sh "your question here"
```

## References

- `references/overview.md` — workflow and background.
- `references/examples.md` — usage examples.
- `references/config.md` — member configuration.
- `references/requirements.md` — dependencies and CLI checks.
- `references/host-ui.md` — host UI checklist guidance.
- `references/safety.md` — safety notes.

Overview

This skill collects opinions from multiple AI agents and synthesizes them into a concise, actionable response. It is designed for situations where you want diverse perspectives, consensus checks, or a summarized judgment from several models. The tool runs agents in parallel, gathers outputs, and creates a synthesized answer with optional provenance and member breakdowns.

How this skill works

You submit a single question and the skill dispatches it to a configurable set of member agents. Each agent returns its response and metadata; the skill then analyzes those responses to identify agreement, divergence, and supporting reasons. Finally, it produces a synthesized summary that highlights consensus points, disagreements, and recommended next steps. You can run a question one-shot or as a tracked job with explicit results and cleanup steps.
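
As a concrete sketch, here is the tracked-job flow using only the council.sh subcommands documented in SKILL.md; the question text, the council-answer.md file name, and the inline comments are illustrative assumptions, and the exact output of `results` may differ.

```bash
# Tracked-job flow built from the documented council.sh subcommands.
QUESTION="Should we adopt feature flags for the next release?"   # illustrative question

JOB_DIR=$(./skills/agent-council/scripts/council.sh start "$QUESTION")            # dispatch to members
./skills/agent-council/scripts/council.sh wait "$JOB_DIR"                         # block until members finish
./skills/agent-council/scripts/council.sh results "$JOB_DIR" > council-answer.md  # capture the synthesized answer
./skills/agent-council/scripts/council.sh clean "$JOB_DIR"                        # remove job artifacts
```

The one-shot form (`./skills/agent-council/scripts/council.sh "your question here"`) runs the same pipeline without the explicit wait, results, and clean steps.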

When to use it

  • When you want multiple independent AI perspectives on a single question.
  • To validate an answer by checking for consensus across models.
  • When complex judgments benefit from seeing supporting arguments and dissenting views.
  • To generate a synthesized recommendation for stakeholders from diverse agent inputs.
  • During research, design reviews, or risk assessments where multiple viewpoints reduce bias.

Best practices

  • Configure a diverse set of member agents to maximize varied viewpoints (see the sketch after this list).
  • Provide clear, specific prompts so members address the same scope and constraints.
  • Use job mode for longer tasks to capture provenance and compare outputs reliably.
  • Review member metadata to understand which agents influenced the synthesis.
  • Limit the number of simultaneous members to balance depth of responses and cost.
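
To make the first practice concrete, below is a purely hypothetical member roster; the real configuration format, field names, and file location are defined in `references/config.md` and may differ entirely.

```bash
# Hypothetical member roster, written out as JSON for illustration only.
# The actual schema and location come from references/config.md.
cat > members.example.json <<'EOF'
[
  { "name": "optimist",     "model": "model-a", "notes": "argues for the proposal" },
  { "name": "skeptic",      "model": "model-b", "notes": "hunts for failure modes" },
  { "name": "risk-analyst", "model": "model-c", "notes": "focuses on safety and cost" }
]
EOF
```

Whatever the real format, the point is the same: mixing members with different models or roles gives the synthesis genuinely independent viewpoints to reconcile.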

Example use cases

  • Summon the council to get product feature trade-off recommendations from several models.
  • Ask other AIs to critique a marketing message and synthesize improvements.
  • Collect safety and risk assessments from multiple agents before releasing a policy.
  • Use as a consensus check when a single model returns uncertain or conflicting advice.
  • Run periodic audits of model behavior by comparing outputs across the member set.

FAQ

Can I customize which agents participate?

Yes. Members are configurable so you can include specific models, settings, or roles to shape the range of perspectives.

Does the tool show how the final synthesis was derived?

The skill provides provenance and a breakdown of member responses so you can see where members agreed, where they diverged, and the supporting reasons behind the synthesis.