
This skill helps you challenge assumptions and validate decisions using structured critical thinking prompts to ensure robust, context-aware solutions.

npx playbooks add skill martinffx/claude-code-atelier --skill atelier-oracle-challenge

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (3.9 KB)
---
name: atelier-oracle-challenge
description: Challenge an approach with critical thinking. Use when questioning assumptions, validating decisions, testing approach validity, or preventing automatic agreement.
user-invocable: false
---

# Challenge: Critical Thinking Prompt

## Step 1: Parse Challenge Request

<strategist>
@agent-oracle

Analyze the challenge request: $ARGUMENTS

**Challenge Extraction:**
- **Core concern**: Extract the main doubt or question
- **Target approach**: Identify what is being challenged
- **Context**: Relevant background from current session
- **Specific aspects**: Particular elements to question

**Challenge Summary:**
You're challenging: [identified approach]
Because: [extracted concern]
In context of: [session context]
</strategist>
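
A hypothetical invocation and the summary this step might produce (the project details are illustrative, not taken from a real session):

```
/atelier-challenge "Do we really need Kafka for roughly 200 events per minute?"

You're challenging: Adopting Kafka as the event backbone
Because: Current volume (~200 events/min) may not justify the operational cost
In context of: A three-person team extracting background jobs from a monolith
```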

## Step 2: Set Up Critical Thinking Framework

<framework>
**What to Question:**
- **Underlying assumptions**: What beliefs support this approach?
- **Evidence base**: What data or experience validates it?
- **Context fit**: How well does it work in your specific situation?
- **Alternatives considered**: What other options were explored?
- **Risk factors**: What could go wrong with this approach?

**Critical Thinking Prompts:**
- Is this approach solving the right problem?
- Are the underlying assumptions still valid?
- What evidence contradicts this direction?
- How does this fit with your constraints and goals?
- What are the opportunity costs?
</framework>
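
Applied to the hypothetical Kafka example above, the framework questions might surface notes like these (all details illustrative):

```
Underlying assumptions: Event volume will grow 10x within a year
Evidence base: No load tests yet; the growth figure comes from a sales forecast
Context fit: Nobody on the team has operated Kafka in production
Alternatives considered: A Postgres-backed job queue, a managed queue service
Risk factors: Operational burden could delay the core feature by weeks
```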

## Step 3: Sequential Thinking Analysis

<strategist>
@agent-oracle

Use sequential thinking (mcp__sequential-thinking__sequentialthinking) to analyze this challenge:

**Thought 1**: Question the fundamental assumptions
**Thought 2**: Examine contradictory evidence
**Thought 3**: Explore alternative approaches
**Thought 4**: Assess context-specific fit
**Thought 5**: Evaluate risks and trade-offs
**Thought 6**: Synthesize findings into recommendation

Build systematically through evidence, alternatives, and risks.
Continue until you reach a clear conclusion.
</strategist>
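
For the same hypothetical case, the opening thoughts of the sequence might read as follows (illustrative only; the oracle agent writes its own):

```
Thought 1/6: The plan assumes 10x event growth, but that figure is a forecast, not measured demand.
Thought 2/6: The only measured load is ~200 events/min, well within a single-database queue's capacity.
Thought 3/6: A Postgres-backed or managed queue would defer the Kafka decision without blocking growth.
```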

## Step 4: Critical Evaluation Output

**Self-Critique Questions:**
- Does the analysis address the user's specific context?
- Are the recommendations practical and implementable?
- Have we considered the most important constraints?
- Are there any blind spots or missing perspectives?

**Final Synthesis:**
- **Assumption validity**: Are the underlying assumptions sound?
- **Evidence assessment**: Does evidence support or contradict?
- **Alternative recommendation**: If the current approach is problematic, what should replace it?
- **Risk mitigation**: How to address identified concerns?
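
A completed synthesis for the hypothetical Kafka challenge might look like this (illustrative):

```
Assumption validity: The 10x growth assumption is a forecast, not measured demand
Evidence assessment: Current volume contradicts the need for Kafka today
Alternative recommendation: Start with a Postgres-backed queue behind an interface that allows a later swap
Risk mitigation: Agree on a volume threshold that triggers re-evaluation of a dedicated event broker
```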

---

## Usage Examples

**Challenge Technical Decisions:**
```
/atelier-challenge "Do we really need a microservices architecture for this simple app?"
```

**Challenge Implementation Approach:**
```
/atelier-challenge "I think this caching strategy will actually slow things down"
```

**Challenge Requirements:**
```
/atelier-challenge "Are we solving the right problem with this feature?"
```

**Challenge Architectural Patterns:**
```
/atelier-challenge "Should we really use event sourcing for this use case?"
```

## When to Use Challenge

**Before Major Decisions:**
- Architecture choices
- Technology stack decisions
- Design pattern selection
- Implementation approach

**When Something Feels Off:**
- "This seems overly complex"
- "I'm not sure this solves the real problem"
- "This approach feels wrong"
- "Are we over-engineering this?"

**To Prevent Automatic Agreement:**
- When you want genuine critical evaluation
- When you need to challenge conventional wisdom
- When you want to test your own assumptions

## Challenge vs ThinkDeep

**Use /atelier-challenge**: Question assumptions, test validity, assess risks, prevent automatic agreement
**Use /atelier-thinkdeep**: Deep exploration, comprehensive analysis, alternative discovery, complex decisions

**Key distinction**: Challenge = critical evaluation, ThinkDeep = deep exploration

Overview

This skill provides a structured critical-thinking challenger for technical decisions and implementation plans. It helps teams and developers surface hidden assumptions, test evidence, and evaluate risks before committing to architecture, patterns, or design choices. Use it to prevent automatic agreement and to get concise, actionable pushback tailored to your context.

How this skill works

You submit a concise challenge prompt describing the approach or decision you want questioned. The skill parses the request to extract the core concern, the target approach, and relevant context, then runs a sequential critical-thinking analysis through a set of focused prompts. The output highlights assumption validity, contradictory evidence, alternatives, risk trade-offs, and a recommended next step or mitigations.

When to use it

  • Before major decisions: architecture, stacks, or pattern selection
  • When implementation or design feels overly complex or misaligned
  • To validate assumptions behind performance, scaling, or security choices
  • When you need independent, rigorous pushback to avoid groupthink
  • Prior to committing to costly migrations or long-lived infrastructure choices

Best practices

  • Provide concise context: goals, constraints, and current approach (see the example prompt below)
  • Include specific metrics or evidence if available (performance numbers, error rates)
  • Frame the prompt as a single decision or assumption to keep analysis focused
  • Ask for alternatives and risk mitigations, not just criticism
  • Use iteratively: run the challenge early, then re-run after changes
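
A well-scoped challenge prompt that follows these practices might look like this (all project details are hypothetical):

```
/atelier-challenge "We plan to shard the orders table this quarter. Goal: keep p95 reads under 100 ms.
Evidence: 40M rows today, p95 is 60 ms and rising about 5 ms per quarter.
Constraints: two backend engineers, feature freeze in six weeks.
Challenge: is sharding premature compared to better indexing or a read replica?"
```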

Example use cases

  • Questioning whether microservices are justified for a small app and asking for simpler alternatives
  • Challenging a proposed caching strategy that may degrade latency under load
  • Testing whether event sourcing is appropriate for a given domain and team skillset
  • Validating whether a chosen persistence model will meet future scalability needs
  • Evaluating trade-offs of adopting a new language or framework across the codebase

FAQ

What input yields the best challenge result?

A short prompt describing the decision, the expected benefit, constraints (time, team, budget), and any supporting data produces the most targeted analysis.

Is this a replacement for design review meetings?

No. Use this skill to strengthen design reviews by surfacing blind spots and concrete alternatives; follow up with team discussion for alignment and implementation planning.