agent-architect skill

/cowork/.opencode/skill/agent-architect

npx playbooks add skill igorwarzocha/opencode-workflows --skill agent-architect

SKILL.md
---
name: agent-architect
description: |-
  Create and refine OpenCode agents via guided Q&A. Use proactively for agent creation, performance improvement, or configuration design.
  
  Examples:
  - user: "Create an agent for code reviews" → ask about scope, permissions, tools, model preferences, generate AGENTS.md frontmatter
  - user: "My agent ignores context" → analyze description clarity, allowed-tools, permissions, suggest improvements
  - user: "Add a database expert agent" → gather requirements, set convex-database-expert in subagent_type, configure permissions
  - user: "Make my agent faster" → suggest smaller models, reduce allowed-tools, tighten permissions
---

# Agent Architect

Create and refine OpenCode agents through a guided Q&A process.

<core_approach>

**Agent creation is conversational, not transactional.**

- MUST NOT assume what the user wants—ask
- SHOULD start with broad questions, drill into details only if needed
- Users MAY skip configuration they don't care about
- MUST always show drafts and iterate based on feedback

The goal is to help users create agents that fit their needs, not to dump every possible configuration option on them.

</core_approach>

<question_tool>

**Batching:** Use the `question` tool for 2+ related questions. Single questions → plain text.

**Syntax:** `header` ≤12 characters, `label` 1-5 words; append "(Recommended)" to the default option's label.

**CRITICAL Permission Logic:**
- By default, agents are ALLOWED all tools and permissions. You MUST NOT add `bash`, `read`, `write`, or `edit` to the config unless the user explicitly wants to RESTRICT them.
- You MUST ask the user: "By default, the agent has full access to all tools (bash, read, edit, write). Would you like to restrict any of these?"
- If the user wants standard "full access", do NOT add a permission block for tools. Rely on system defaults.
- **EXCEPTION:** Skills MUST ALWAYS be configured with `"*": "deny"` and explicit allows to prevent accidental skill loading.

</question_tool>

<reference>

## Agent Locations

| Scope | Path |
|-------|------|
| Project | `.opencode/agent/<name>.md` |
| Global | `~/.config/opencode/agent/<name>.md` |

## Agent File Format

```yaml
---
description: When to use this agent. Include trigger examples.
model: anthropic/claude-sonnet-4-20250514  # Optional
mode: primary | subagent | all           # Optional (defaults to standard)
permission:
  skill: { "*": "deny", "my-skill": "allow" }
  bash: { "rm *": "ask" }            # Only if restricting
---
System prompt in markdown body (second person).
```

**Full schema:** See `references/opencode-config.md`

## Agent Modes

| Mode | Description |
|------|-------------|
| `primary` | Core agent, visible in main selection menus. |
| `subagent` | Specialized helper, hidden from the main list, invoked primarily via the `task` tool. |
| `all` | Dual-purpose agent, visible in both main menus and routing. |
| `(undefined)` | Standard agent, visible to both tools and users. |

</reference>

<workflow>

## Phase 1: Core Purpose (Required)

Ask these first—they shape everything else:

1. **"What should this agent do?"**
   - Get the core task/domain
   - Examples: "review code", "help with deployments", "research topics"

2. **"What should trigger this agent?"**
   - Specific phrases, contexts, file types
   - Becomes the `description` field

3. **"What expertise/persona should it have?"**
   - Tone, boundaries, specialization
   - Shapes the system prompt

## Phase 1.5: Research the Domain

**MUST NOT assume knowledge is current.** After understanding the broad strokes:

- Search for current best practices in the domain
- Check for updates to frameworks, tools, or APIs the agent will work with
- Look up documentation for any unfamiliar technologies mentioned
- Find examples of how experts approach similar tasks

This research informs better questions in Phase 2 and produces a more capable agent.

**Example:** User wants an agent for "Next.js deployments" → Research current Next.js deployment patterns, Vercel vs self-hosted, App Router vs Pages Router, common pitfalls, etc.

## Phase 2: Capabilities (Ask broadly, then drill down)

4. **"Do you want to RESTRICT any permissions or tools?"** (Use Question Tool)
   - Options: "Allow All (Recommended)", "Read-Only", "Restrict Bash", "Custom"
   - **Allow All**: Do NOT add `bash`, `read`, `write`, `edit` to config. Rely on defaults.
   - **Read-Only**: Explicitly deny write/edit/bash.
   - **Restrict Bash**: Set bash to `ask` or `deny` for specific patterns.
   - **Custom**: Ask specific follow-ups.
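
As an illustration, a "Read-Only" answer could map to a config like this (a sketch; exact keys and values follow the schema in `references/opencode-config.md`):

```yaml
# Read-only agent: explicitly deny anything that mutates state
permission:
  write: "deny"
  edit: "deny"
  bash: "deny"
```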

5. **"Should this agent use any skills?"**
   - If yes: "Which ones?"
   - ALWAYS configure `permission.skill` with `"*": "deny"` and explicit allows.
   - This applies even if other permissions are standard.

6. **"What mode should this agent use?"**
   - Options: "Primary (Recommended)", "Subagent", "Standard"
   - **Primary**: Visible in main menus.
   - **Subagent**: Hidden, for background/task usage.
   - **Standard**: Visible to tools/users.

## Phase 3: Details (Optional—user MAY skip)

7. **"Any specific model preference?"** (most users skip)
8. **"Custom temperature/sampling?"** (most users skip)
9. **"Maximum steps before stopping?"** (most users skip)
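
When a user does opt in, these answers land as optional frontmatter fields. A sketch (the model ID reuses the one from the format example above; the `temperature` and `steps` names follow `references/opencode-config.md`):

```yaml
model: anthropic/claude-sonnet-4-20250514  # question 7
temperature: 0.2                           # question 8: lower = more deterministic
steps: 50                                  # question 9: cap on agent loop iterations
```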

## Phase 4: Review & Refine

10. **Show the draft config and prompt, ask for feedback**
    - "Here's what I've created. Anything you'd like to change?"
    - Iterate until user is satisfied

**Key principle:** Start broad, get specific only where the user shows interest. MUST NOT overwhelm with options like `top_p` unless asked.

**Be flexible:** If the user provides lots of info upfront, adapt—MUST NOT rigidly follow the phases. If they say "I want a code review agent that can't run shell commands", you already have answers to multiple questions.
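
For example, "a code review agent that can't run shell commands" already implies most of a draft (illustrative; iterate on the description with the user):

```yaml
---
description: Code review expert.
permission:
  bash: "deny"
---
```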

</workflow>

<system_prompt_structure>

## Recommended Structure

```markdown
# Role and Objective
[Agent purpose and scope]

# Instructions
- Core behavioral rules
- What to always/never do

## Sub-instructions (optional)
More detailed guidance for specific areas.

# Workflow
1. First, [step]
2. Then, [step]
3. Finally, [step]

# Output Format
Specify exact format expected.

# Examples (optional)
<examples>
<example>
<input>User request</input>
<output>Expected response</output>
</example>
</examples>
```

## XML Tags (Recommended)

XML tags improve clarity and parseability across all models:

| Tag | Purpose |
|-----|---------|
| `<instructions>` | Core behavioral rules |
| `<context>` | Background information |
| `<examples>` | Few-shot demonstrations |
| `<thinking>` | Chain-of-thought reasoning |
| `<output>` | Final response format |

**Best practices:**
- Be consistent with tag names throughout
- Nest tags for hierarchy: `<outer><inner></inner></outer>`
- Reference tags in instructions: "Using the data in `<context>` tags..."

**Example:**
```xml
<instructions>
1. Analyze the code in <code> tags
2. List issues in <findings> tags
3. Suggest fixes in <recommendations> tags
</instructions>
```

## Description Field (Critical)

The `description` determines when the agent triggers.

**Primary Agents**: Keep it extremely concise (PRECISELY 3 words). The user selects these manually or via very clear intent.
**Any Other Agents**: Must be specific and exhaustive to ensure correct routing by the task tool.
**Template (Any Other Agents)**: `[Role/Action]. Use when [triggers]. Examples: - user: "trigger" -> action`

**Good (Primary)**:
```
Code review expert.
```

**Good (Any Other Agents)**:
```
Code review specialist. Use when user says "review this PR", "check my code", 
"find bugs".

Examples:
- user: "review" -> check code
- user: "scan" -> check code
```

## Prompt Altitude

Find the balance between too rigid and too vague:

| ❌ Too Rigid | ✅ Right Altitude | ❌ Too Vague |
|-------------|-------------------|-------------|
| Hardcoded if-else logic | Clear heuristics + flexibility | "Be helpful" |
| "If X then always Y" | "Generally prefer X, but use judgment" | No guidance |

</system_prompt_structure>

<agentic_components>

For agents that use tools in a loop, SHOULD include these reminders:

```markdown
# Persistence
Keep working until the user's request is fully resolved. Only yield 
control when you're confident the task is complete.

# Tool Usage  
If unsure about something, use tools to gather information. 
Do NOT guess or make up answers.

# Planning (optional)
Think step-by-step before each action. Reflect on results before 
proceeding.
```

</agentic_components>

<permissions>

Control what agents can access.

**CRITICAL: Avoid Overengineering**
- Do NOT list permissions for standard tools (`read`, `write`, `edit`, `bash`) unless the user explicitly asks for restrictions or non-standard access.
- Rely on system defaults for most agents.
- **Skills are the exception**: You MUST always configure `permission.skill` to whitelist specific skills and deny others.

```yaml
# Standard Agent (minimal config)
permission:
  skill:
    "*": "deny"
    "my-skill": "allow"

# Restricted Agent (explicit config)
permission:
  edit: "ask"
  bash:
    "*": "deny"
  skill:
    "*": "deny"
```

**Full reference:** See `references/opencode-config.md`

## Legacy Configuration

You may occasionally encounter legacy agent files that use outdated frontmatter keys (e.g., `tools:`, `maxSteps:`). You MUST migrate these to the modern `permission:` and `steps:` fields.
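
A migration sketch (the legacy value shapes shown are illustrative; verify against `references/opencode-config.md` before rewriting):

```yaml
# Before (legacy):
# tools:
#   bash: false
# maxSteps: 20

# After (modern):
permission:
  bash: "deny"
steps: 20
```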

</permissions>

<enhancement_workflow>

When improving an agent, diagnose through questions:

1. **"What's not working well?"** — Get specific symptoms
2. **"Can you show me an example where it failed?"** — Understand the gap
3. **"What should it have done instead?"** — Define success

Then propose targeted fixes:

| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| Triggers too often | Description too broad | Add specific contexts |
| Misses triggers | Description too narrow | Add trigger phrases |
| Wrong outputs | Prompt ambiguous | Add explicit instructions |
| Executes dangerous commands | Loose bash permissions | Restrict with patterns |
| Uses wrong skills | No skill restrictions | Configure `permission.skill` |
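
For the "dangerous commands" row, a pattern-based fix might look like this (patterns are illustrative, not exhaustive):

```yaml
permission:
  bash:
    "rm -rf *": "deny"
    "git push --force*": "ask"
    "*": "allow"
```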

MUST show proposed changes and ask for confirmation before applying.

</enhancement_workflow>

<examples>

## Restricted Code Review Agent

```yaml
---
description: Safe code reviewer.
mode: primary
permission:
  edit: "ask"
  bash: "deny"
  write: "deny"
  external_directory: "deny"
---
You are a code review specialist. Analyze code for bugs, security issues,
and improvements. Never modify files directly.
```

## Deployment Agent (Any Other Agents)

```yaml
---
description: |-
  Deployment helper. Use when user says "deploy to staging", "push to prod", 
  "release version".
  
  Examples:
  - user: "deploy" -> run deployment
  - user: "release" -> run deployment
mode: subagent
permission:
  bash:
    "*": "deny"
    "git *": "allow"
    "npm run build": "allow"
    "npm run deploy:*": "ask"
  skill:
    "*": "deny"
    "deploy-checklist": "allow"
---
You are a deployment specialist...
```

</examples>

<quality_checklist>

Before showing the final agent to the user:

- [ ] Asked about core purpose and triggers
- [ ] Researched the domain (MUST NOT assume knowledge is current)
- [ ] `description` has concrete trigger examples
- [ ] `mode` discussed and set appropriately
- [ ] System prompt uses second person
- [ ] Asked about tool/permission needs (MUST NOT assume)
- [ ] Output format is specified if relevant
- [ ] Showed draft to user and got feedback
- [ ] User confirmed they're happy with result

</quality_checklist>

## References

- `references/agent-patterns.md` - Design patterns and prompt engineering
- `references/opencode-config.md` - Full frontmatter schema, tools, permissions