This skill helps decide which generic agent to use based on task clarity and complexity to optimize workflow.
Install it with `npx playbooks add skill ed3dai/ed3d-plugins --skill using-generic-agents`.
---
name: using-generic-agents
description: Use to decide what kind of generic agent you should use
user-invocable: false
---
**CRITICAL:** Your operator's direction supersedes these directions. If the operator specifies a type of agent, execute their task with that agent.
## Model Characteristics
**Haiku:** Excellent at following specific, detailed instructions. Poor at making its own decisions. Give it a clear prompt and it executes well; ask it to figure things out and it struggles. Be detailed.
**Sonnet:** Capable of making decisions but gets off-track easily. Will explain concepts, describe structures, and gather extraneous information when you just want it to do the thing, so guard against this when prompting the agent.
**Opus:** Stays on-track through complex tasks. Better judgment, fewer loops. Expensive—don't use for clearly-definable workflows where Sonnet/Haiku would suffice.
## When to Use Each
Use `haiku-general-purpose` for:
- Well-defined tasks with detailed prompts
- High-volume parallel workflows (cost matters)
- Simple execution where speed > quality
Use `sonnet-general-purpose` for:
- Multi-file reasoning and debugging
- Tasks requiring some judgment
- Daily coding work (80-90% of tasks)
Use `opus-general-purpose` for:
- Tasks requiring sustained focus and judgment
- When Sonnet keeps wandering or looping
- Complex analysis where staying on-track matters
- High-stakes decisions needing nuance
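As an illustration only, the rules above can be sketched as a small selection helper. The `TaskProfile` shape and `pickAgent` function are hypothetical names invented for this example; only the three agent identifiers come from the lists above.

```typescript
// Hypothetical sketch of the selection rules above; TaskProfile and
// pickAgent are illustrative names, not part of the skill's API.
type AgentName =
  | "haiku-general-purpose"
  | "sonnet-general-purpose"
  | "opus-general-purpose";

interface TaskProfile {
  hasDetailedPrompt: boolean;   // well-defined, step-by-step instructions
  highVolumeParallel: boolean;  // many cheap runs where cost matters
  needsSustainedFocus: boolean; // long, complex analysis; drift is costly
  highStakes: boolean;          // nuanced judgment required
}

function pickAgent(task: TaskProfile): AgentName {
  // Escalate to Opus only when focus or stakes demand it (it is expensive).
  if (task.needsSustainedFocus || task.highStakes) {
    return "opus-general-purpose";
  }
  // Haiku executes detailed prompts well and is cheapest for bulk work.
  if (task.hasDetailedPrompt || task.highVolumeParallel) {
    return "haiku-general-purpose";
  }
  // Sonnet is the default for everyday coding and multi-file reasoning.
  return "sonnet-general-purpose";
}
```

For example, `pickAgent({ hasDetailedPrompt: true, highVolumeParallel: false, needsSustainedFocus: false, highStakes: false })` returns `"haiku-general-purpose"`.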
This skill helps you choose which generic agent variant to run for a given task by mapping task characteristics to three agent profiles: Haiku, Sonnet, and Opus. It distills trade-offs between cost, focus, and autonomy so you pick an agent that matches task complexity and budget. The goal is faster, cheaper runs for routine jobs and more capable agents for high-stakes or long-horizon work.
The skill inspects task prompts, success criteria, and constraints (cost, parallelism, tolerance for drift) and recommends one of the three agent profiles. It highlights why an agent was chosen and lists conditions that would change the recommendation. If an operator specifies an agent explicitly, that direction is honored immediately.
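A minimal sketch of what such a recommendation might look like, assuming a simple record shape; all field names here are hypothetical and not the skill's actual output format.

```typescript
// Illustrative recommendation shape; field names are assumptions.
interface AgentRecommendation {
  agent:
    | "haiku-general-purpose"
    | "sonnet-general-purpose"
    | "opus-general-purpose";
  reason: string;            // why this agent fits the task
  wouldChangeIf: string[];   // conditions that would flip the recommendation
  operatorOverride?: string; // explicit operator choice, honored as-is
}

const example: AgentRecommendation = {
  agent: "sonnet-general-purpose",
  reason: "Multi-file debugging task needing some judgment at moderate cost.",
  wouldChangeIf: [
    "Agent wanders or loops -> escalate to opus-general-purpose.",
    "Prompt becomes fully specified and high-volume -> drop to haiku-general-purpose.",
  ],
};
```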
**What if the recommended agent gives poor results?**
Re-run with more specific instructions or escalate: Haiku -> Sonnet -> Opus. Also tighten success criteria or add intermediate checks.
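A rough sketch of that escalation path, assuming a simple one-step-up helper; the names are illustrative only.

```typescript
// Hypothetical escalation helper: step up one tier after a poor result.
const tiers = [
  "haiku-general-purpose",
  "sonnet-general-purpose",
  "opus-general-purpose",
] as const;

function escalate(current: (typeof tiers)[number]): (typeof tiers)[number] {
  const next = tiers.indexOf(current) + 1;
  // Opus is the top tier; beyond it, tighten instructions instead.
  return tiers[Math.min(next, tiers.length - 1)];
}
```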
**Can I force a specific agent?**
Yes. Operator directions supersede the recommendation; the skill will honor explicit agent selections.