
atelier-challenge skill

/plugins/atelier-oracle/skills/atelier-challenge

This skill helps you challenge assumptions and validate decisions using a structured critical thinking framework to prevent automatic agreement.

npx playbooks add skill martinffx/claude-code-atelier --skill atelier-challenge

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (3.9 KB)
---
name: atelier-challenge
description: Challenge an approach with critical thinking. Use when questioning assumptions, validating decisions, testing approach validity, or preventing automatic agreement.
user-invocable: false
---

# Challenge: Critical Thinking Prompt

## Step 1: Parse Challenge Request

<strategist>
@agent-oracle

Analyze the challenge request: $ARGUMENTS

**Challenge Extraction:**
- **Core concern**: Extract the main doubt or question
- **Target approach**: Identify what is being challenged
- **Context**: Relevant background from current session
- **Specific aspects**: Particular elements to question

**Challenge Summary:**
You're challenging: [identified approach]
Because: [extracted concern]
In context of: [session context]
</strategist>
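
The extraction above can be pictured as a small structured record. A minimal sketch, assuming a hypothetical Python representation (the class and field names are illustrative and not part of the skill itself):

```python
from dataclasses import dataclass, field

@dataclass
class ChallengeRequest:
    """Hypothetical container for the Step 1 extraction."""
    core_concern: str                # the main doubt or question
    target_approach: str             # what is being challenged
    context: str                     # relevant background from the session
    specific_aspects: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """Render the Challenge Summary template from Step 1."""
        return (
            f"You're challenging: {self.target_approach}\n"
            f"Because: {self.core_concern}\n"
            f"In context of: {self.context}"
        )
```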

## Step 2: Set Up Critical Thinking Framework

<framework>
**What to Question:**
- **Underlying assumptions**: What beliefs support this approach?
- **Evidence base**: What data or experience validates it?
- **Context fit**: How well does it work in your specific situation?
- **Alternatives considered**: What other options were explored?
- **Risk factors**: What could go wrong with this approach?

**Critical Thinking Prompts:**
- Is this approach solving the right problem?
- Are the underlying assumptions still valid?
- What evidence contradicts this direction?
- How does this fit with your constraints and goals?
- What are the opportunity costs?
</framework>
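
For illustration, the framework's dimensions can be treated as a fixed checklist keyed by the bullets above. A sketch under that assumption (all names are hypothetical, not defined by the skill):

```python
# Hypothetical checklist mirroring the framework dimensions above.
FRAMEWORK_QUESTIONS = {
    "underlying_assumptions": "What beliefs support this approach?",
    "evidence_base": "What data or experience validates it?",
    "context_fit": "How well does it work in your specific situation?",
    "alternatives_considered": "What other options were explored?",
    "risk_factors": "What could go wrong with this approach?",
}

def unanswered(answers: dict[str, str]) -> list[str]:
    """List the framework questions that still lack a substantive answer."""
    return [q for key, q in FRAMEWORK_QUESTIONS.items() if not answers.get(key)]
```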

## Step 3: Sequential Thinking Analysis

<strategist>
@agent-oracle

Use sequential thinking (mcp__sequential-thinking__sequentialthinking) to analyze this challenge:

**Thought 1**: Question the fundamental assumptions
**Thought 2**: Examine contradictory evidence
**Thought 3**: Explore alternative approaches
**Thought 4**: Assess context-specific fit
**Thought 5**: Evaluate risks and trade-offs
**Thought 6**: Synthesize findings into recommendation

Build the analysis systematically through evidence, alternatives, and risks.
Continue until you reach a clear conclusion.
</strategist>
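
As a reference point, the first call in this sequence might carry arguments like the following. The parameter names follow the public sequential-thinking MCP server's schema; the thought text is only an example, not prescribed by this skill:

```python
# Illustrative first tool call for the six-thought plan above.
first_thought = {
    "thought": (
        "Question the fundamental assumptions: what beliefs must hold "
        "for this approach to be the right one?"
    ),
    "thoughtNumber": 1,
    "totalThoughts": 6,         # one per planned thought
    "nextThoughtNeeded": True,  # keep going until a clear conclusion
}
```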

## Step 4: Critical Evaluation Output

**Self-Critique Questions:**
- Does the analysis address the user's specific context?
- Are the recommendations practical and implementable?
- Have we considered the most important constraints?
- Are there any blind spots or missing perspectives?

**Final Synthesis:**
- **Assumption validity**: Are the underlying assumptions sound?
- **Evidence assessment**: Does the evidence support or contradict the approach?
- **Alternative recommendation**: If the current approach is problematic, what should replace it?
- **Risk mitigation**: How can the identified concerns be addressed?
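
If the synthesis were captured programmatically, it could take a shape like this hypothetical record (field names mirror the bullets above; this is a sketch, not part of the skill):

```python
from typing import TypedDict

class ChallengeSynthesis(TypedDict):
    """Hypothetical shape for the Step 4 final synthesis."""
    assumption_validity: str         # are the underlying assumptions sound?
    evidence_assessment: str         # does the evidence support or contradict?
    alternative_recommendation: str  # what to do instead, if needed
    risk_mitigation: str             # how to address identified concerns
```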

---

## Usage Examples

**Challenge Technical Decisions:**
```
/atelier-challenge "Do we really need a microservices architecture for this simple app?"
```

**Challenge Implementation Approach:**
```
/atelier-challenge "I think this caching strategy will actually slow things down"
```

**Challenge Requirements:**
```
/atelier-challenge "Are we solving the right problem with this feature?"
```

**Challenge Architectural Patterns:**
```
/atelier-challenge "Should we really use event sourcing for this use case?"
```

## When to Use Challenge

**Before Major Decisions:**
- Architecture choices
- Technology stack decisions
- Design pattern selection
- Implementation approach

**When Something Feels Off:**
- "This seems overly complex"
- "I'm not sure this solves the real problem"
- "This approach feels wrong"
- "Are we over-engineering this?"

**To Prevent Automatic Agreement:**
- When you want genuine critical evaluation
- When you need to challenge conventional wisdom
- When you want to test your own assumptions

## Challenge vs ThinkDeep

**Use /atelier-challenge**: Question assumptions, test validity, assess risks, prevent automatic agreement
**Use /atelier-thinkdeep**: Deep exploration, comprehensive analysis, alternative discovery, complex decisions

**Key distinction**: Challenge = critical evaluation, ThinkDeep = deep exploration

Overview

This skill challenges an approach using disciplined critical thinking. It helps teams and individuals question assumptions, validate decisions, test approach validity, and avoid automatic agreement before committing to major technical or product choices.

How this skill works

The skill parses a short challenge request to extract the core concern, the target approach, and relevant context. It applies a structured framework that questions assumptions, inspects evidence, compares alternatives, and evaluates risks, then synthesizes findings into actionable recommendations and mitigations.

When to use it

  • Before committing to major architecture or technology choices
  • When a design or implementation feels overly complex or risky
  • To validate whether a proposed solution actually solves the right problem
  • When you want an independent critical review to avoid groupthink
  • When comparing competing approaches and needing a trade-off analysis

Best practices

  • Provide concise context and the specific approach you want challenged
  • Include constraints, goals, and any supporting evidence or metrics
  • Ask for alternatives if you want replacement options, not just critique
  • Treat findings as inputs for discussion—combine with domain expertise
  • Use the recommendations to define experiments or decision checkpoints

Example use cases

  • Questioning whether a microservices split is justified for a small app
  • Challenging a proposed caching strategy that may degrade latency
  • Assessing whether event sourcing is a good fit for a given domain
  • Validating that a new feature addresses the underlying user problem
  • Reviewing a vendor or library choice for hidden operational risks

FAQ

What output should I expect?

You get a concise challenge summary, an analysis of assumptions and evidence, alternative options, a risk assessment, and practical recommendations.

How much context do I need to provide?

A short description of the approach, goals, constraints, and any key data is usually enough to produce a useful critique.