
omakase-off skill

/test-kitchen/skills/omakase-off

This skill facilitates chef's-choice exploration by generating and evaluating multiple architectural variants in parallel to identify the best approach.

npx playbooks add skill 2389-research/claude-plugins --skill omakase-off

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
4.8 KB
---
name: omakase-off
description: This skill should be used as the entry gate for build/create/implement requests. Triggers on "build X", "create Y", "implement Z", "add feature", "try both approaches", "not sure which approach". Offers brainstorm-together or omakase (chef's choice parallel exploration) options. Detects indecision during brainstorming to offer parallel exploration.
---

# Omakase-Off

Chef's choice exploration - when you're not sure WHAT to build, explore different approaches in parallel.

**Part of Test Kitchen Development:**
- `omakase-off` - Chef's choice exploration (different approaches/plans)
- `cookoff` - Same recipe, multiple cooks compete (same plan, multiple implementations)

**Core principle:** Let indecision emerge naturally during brainstorming, then implement multiple approaches in parallel to let real code + tests determine the best solution.

## Three Triggers

### Trigger 1: BEFORE Brainstorming

**When:** "I want to build...", "Create a...", "Implement...", "Add a feature..."

**Present:**
```
Before we brainstorm the details, would you like to:

1. Brainstorm together - We'll explore requirements and design step by step
2. Omakase (chef's choice) - I'll generate 3-5 best approaches, implement them
   in parallel, and let tests pick the winner
```

### Trigger 2: DURING Brainstorming (Indecision Detection)

**Detection signals:**
- 2+ uncertain responses in a row on architectural decisions
- Phrases: "not sure", "don't know", "either works", "you pick", "no preference"

**When detected:**
```
You seem flexible on the approach. Would you like to:
1. I'll pick what seems best and continue brainstorming
2. Explore multiple approaches in parallel (omakase-off)
```
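The phrase signals above can be approximated mechanically. A minimal sketch, assuming recent user replies are saved one per line in a hypothetical `transcript.txt` (the file name and threshold are illustrative):

```shell
# Naive phrase-based indecision check (illustrative only).
# transcript.txt is a hypothetical log of recent user replies, one per line.
printf '%s\n' "hmm, not sure" "either works for me" "you pick" > transcript.txt

# Count replies containing any indecision phrase, case-insensitively.
signals=$(grep -ciE "not sure|don't know|either works|you pick|no preference" transcript.txt)

# Two or more uncertain replies suggests offering parallel exploration.
if [ "$signals" -ge 2 ]; then
  echo "offer omakase"
fi
```

Note this counts total matching replies rather than consecutive ones; real detection would track streaks and restrict the check to architectural questions.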

### Trigger 3: Explicitly Requested

- "try both approaches", "explore both", "omakase"
- "implement both variants", "let's see which is better"

## Workflow Overview

| Phase | Description |
|-------|-------------|
| **0. Entry** | Present brainstorm vs omakase choice |
| **1. Brainstorm** | Passive slot detection during design |
| **1.5. Decision** | If slots detected, offer parallel exploration |
| **2. Plan** | Generate implementation plan per variant |
| **3. Implement** | Dispatch ALL agents in SINGLE message |
| **4. Evaluate** | Scenario tests → fresh-eyes → judge survivors |
| **5. Complete** | Finish winner, cleanup losers |

See `references/detailed-workflow.md` for full phase details.

## Directory Structure

```
docs/plans/<feature>/
  design.md                  # Shared context from brainstorming
  omakase/
    variant-<slug>/
      plan.md                # Implementation plan for this variant
    result.md                # Final report

.worktrees/
  variant-<slug>/            # Omakase variant worktree
```
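The `.worktrees/` entries map onto `git worktree` branches. A hedged sketch of the setup step — the throwaway demo repo and the `json`/`sqlite` slugs are illustrative; in practice this runs inside the project repo with slugs taken from the detected slots:

```shell
set -e
# Demo repo only, to keep the snippet self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init

# One isolated worktree + branch per variant slug.
for slug in json sqlite; do
  git worktree add -q -b "variant-$slug" ".worktrees/variant-$slug"
done

git worktree list
```

Each variant agent then works inside its own `.worktrees/variant-<slug>/` checkout, so parallel implementations never touch each other's files.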

## Slot Classification

| Type | Examples | Worth exploring? |
|------|----------|------------------|
| **Architectural** | Storage engine, framework, auth method | Yes |
| **Trivial** | File location, naming, config format | No |

Only architectural decisions become slots for parallel exploration.

## Variant Limits

**Max 5-6 implementations.** Don't do full combinatorial explosion:
1. Identify the primary axis (biggest architectural impact)
2. Create variants along that axis
3. Fill secondary slots with natural pairings

## Critical Rules

1. **Dispatch ALL variants in SINGLE message** - Multiple Task tools, one message
2. **MUST use scenario-testing** - Not manual verification
3. **Fresh-eyes on survivors** - Required before judge comparison
4. **Always cleanup losers** - Remove worktrees and branches
5. **Write result.md** - Document what was tried and why winner won
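Rule 4's cleanup can be sketched as follows. Branch and path names are illustrative, mirroring the layout above; the demo repo setup exists only to make the snippet runnable on its own:

```shell
set -e
# Demo setup: a throwaway repo with one losing variant checked out as a worktree.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git worktree add -q -b variant-json .worktrees/variant-json

# Cleanup of a losing variant: remove its worktree, then delete its branch.
git worktree remove .worktrees/variant-json
git branch -D variant-json
```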

## Skills Orchestrated

| Dependency | Usage |
|------------|-------|
| `brainstorming` | Modified flow with passive slot detection |
| `writing-plans` | Generate implementation plan per variant |
| `git-worktrees` | Create isolated worktree per variant |
| `parallel-agents` | Dispatch all variant subagents in parallel |
| `scenario-testing` | Run same scenarios against all variants |
| `fresh-eyes` | Quality review on survivors → input for judge |
| `finish-branch` | Handle winner (merge/PR), cleanup losers |

## Example Flow

```
User: "I need to build a CLI todo app."

Claude: [Triggers omakase-off]
Before we dive in, how would you like to approach this?
1. Brainstorm together
2. Omakase (chef's choice)

User: "1"

Claude: [Brainstorming proceeds, detects indecision on storage]

You seem flexible on storage (JSON vs SQLite). Would you like to:
1. Explore in parallel - I'll implement both variants
2. Best guess - I'll pick JSON (simpler)

User: "1"

[Creates plans for variant-json, variant-sqlite]
[Dispatches parallel agents in SINGLE message]
[Runs scenario tests on both]
[Fresh-eyes review on survivors]
[Presents comparison, user picks winner]
[Cleans up loser, finishes winner branch]
```

Overview

This skill is the entry gate for build/create/implement requests, guiding users between stepwise brainstorming and a parallel "omakase" exploration of multiple implementation approaches. It detects indecision during planning and, when appropriate, offers to run 3–5 variant implementations in parallel, using scenario tests to determine the best result. The goal is to let real code and tests decide the winner while keeping the user in control.

How this skill works

On initial build/create/implement prompts, the skill presents a choice: brainstorm together or run omakase (chef's choice parallel exploration). During brainstorming it passively detects indecision signals (repeated uncertain replies or phrases like "not sure" or "you pick") and offers parallel exploration. If omakase is chosen, it designs multiple variants, dispatches all variant agents in a single message, runs identical scenario tests against each variant, performs fresh-eyes reviews on the survivors, and presents a judged winner before cleaning up the losers.

When to use it

  • When the user asks to build, create, implement, or add a feature and is unsure about key architectural choices
  • When brainstorming reveals repeated uncertainty on major decisions (storage, framework, auth)
  • When the cost of trying multiple approaches is justified by improved confidence or measurable outcomes
  • When you want automated comparison by scenario tests rather than relying on guesswork
  • When the user explicitly requests exploring both/multiple approaches

Best practices

  • Limit parallel variants to 3–5, focusing on the primary architectural axis to avoid combinatorial explosion
  • Classify slots: only treat architectural decisions as parallelizable; keep trivial choices consistent across variants
  • Dispatch all variant work in a single message to enable parallel agents and reproducible orchestration
  • Always run identical scenario tests for fair comparison and require a fresh-eyes review before final judgment
  • Document plans, results, and why the winner was chosen; clean up loser branches/worktrees afterward

Example use cases

  • User wants a CLI tool but is unsure about storage (JSON vs SQLite vs remote DB) — run 3 parallel variants and let tests pick
  • Implement two competing auth strategies (JWT vs OAuth) to compare performance and developer ergonomics
  • Compare two framework options (lightweight vs batteries-included) for a microservice by implementing both and running the same integration tests
  • When brainstorming stalls on architecture, offer omakase to unblock decision-making by empirical evaluation

FAQ

What triggers omakase-off automatically?

Prompts like "build", "create", "implement", or detected indecision during brainstorming will surface the omakase choice.

How many variants will you create?

Defaults to 3–5 variants focused on the primary architectural axis; never do full combinatorial expansion.

How is the winner chosen?

All variants run the same scenario tests; survivors get a fresh-eyes review and a judge compares results and quality before declaring a winner.