
design-skeptic skill

/dotclaude/skills/design-skeptic

This skill stress-tests design proposals by challenging assumptions and exposing failure paths, edge cases, and overengineering risks to improve robustness.

npx playbooks add skill shotaiuchi/dotclaude --skill design-skeptic

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
1.9 KB
---
name: design-skeptic
description: >-
  Critical design analysis. Apply when stress-testing design proposals for
  hidden assumptions, failure scenarios, edge cases, single points of failure,
  and overengineering risks.
user-invocable: false
---

# Skeptic Perspective

Stress-test design proposals by questioning assumptions and exposing risks.

## Analysis Checklist

### Assumption Validation
- Identify implicit assumptions about user behavior or data patterns
- Challenge stated performance expectations with worst-case scenarios
- Verify that availability and reliability claims are backed by evidence
- Question whether the problem statement itself is correctly framed
- Check for optimistic bias in effort estimates and timelines

### Failure Scenarios
- Map single points of failure and their blast radius
- Verify graceful degradation paths for each critical component
- Check that cascading failure modes are identified and mitigated
- Assess disaster recovery and data loss scenarios

### Edge Cases & Limits
- Identify boundary conditions in data size, concurrency, and throughput
- Check behavior under empty, null, or malformed input conditions
- Verify handling of clock skew, network partitions, and race conditions
- Assess what happens at resource exhaustion (memory, disk, connections)

### Complexity Assessment
- Evaluate whether the design is overengineered for the actual requirements
- Check for unnecessary abstraction layers that add cognitive overhead
- Look for simpler alternatives that achieve the same goals
- Assess whether the complexity budget is justified by the problem scope

## Output Format

Report findings with strength ratings:

| Strength | Description |
|----------|-------------|
| Strong | Robust against failures, assumptions well-validated |
| Moderate | Some risks identified but manageable with mitigations |
| Weak | Critical assumptions unvalidated or major failure gaps |
| Neutral | Insufficient information to assess resilience |

Overview

This skill provides a skeptic-driven, checklist-based critique of design proposals to uncover hidden assumptions, failure modes, edge cases, and overengineering risks. It produces actionable findings and a resilience strength rating to guide remediation priorities. Use it when you want a practical, adversarial review that stresses the design and the decisions behind it, rather than a review that simply confirms them.

How this skill works

The skill inspects design documents, architecture diagrams, and requirements to extract implicit assumptions and expected behaviors. It runs a structured checklist covering assumption validation, failure scenarios, edge conditions, and complexity, then reports issues with severity and a strength rating (Strong, Moderate, Weak, Neutral). The output focuses on concrete risks, suggested mitigations, and simpler alternative approaches.
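
The SKILL.md fixes only the strength scale, not the exact report layout, so the shape of a findings report is left to the agent. A hypothetical report (system, findings, and severities invented purely for illustration) might look like this:

```markdown
## Skeptic review: order-service design (hypothetical example)

Strength: Weak

| # | Finding | Severity | Suggested mitigation |
|---|---------|----------|----------------------|
| 1 | Single database primary is a single point of failure for all writes | High | Add a standby with automated failover; state RTO/RPO |
| 2 | Claimed 99.99% availability assumes no cross-zone network partitions | Medium | Define partition behavior; allow a degraded read-only mode |
| 3 | Queue consumer has no handling for malformed or empty events | Medium | Validate input at the producer; route failures to a dead-letter queue |
```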

When to use it

  • Before committing to architecture choices or large implementation effort
  • During design reviews to reveal single points of failure and blast radius
  • When validating nonfunctional claims like availability, performance, or reliability
  • To test whether a design is overengineered relative to requirements
  • When preparing disaster recovery, failover, or capacity plans

Best practices

  • Provide complete context: goals, traffic expectations, SLAs, and failure tolerance
  • Include architecture diagrams, data flows, and operational runbooks where available
  • Prioritize findings by blast radius and likelihood to focus remediation (see the sketch after this list)
  • Ask for concrete metrics or test plans to validate optimistic claims
  • Use the strength rating to guide whether a deeper audit or experiments are needed
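
One way to apply the blast-radius-and-likelihood prioritization is a simple ranking table. The entries below are invented for illustration and are not part of the skill itself:

```markdown
| Finding | Blast radius | Likelihood | Priority |
|---------|--------------|------------|----------|
| No failover for the primary database | All writes across the product | Medium | P1 |
| Missing input validation on an internal endpoint | One service | High | P2 |
| Log retention shorter than the audit requirement | Compliance reporting only | Low | P3 |
```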

Example use cases

  • Evaluating a new microservice design for single points of failure and cascading failures
  • Stress-testing a data pipeline for malformed input, backpressure, and resource exhaustion
  • Assessing whether proposed high-availability measures actually cover network partitions and clock skew
  • Reviewing product feature specs for optimistic user-behavior assumptions and edge workflows
  • Deciding if a proposed architecture is excessive given the team’s operational capacity

FAQ

What does the strength rating mean?

The rating summarizes resilience: Strong means the design is robust and its assumptions are well validated; Moderate means risks exist but are manageable with mitigations; Weak means critical assumptions are unvalidated or there are major failure gaps; Neutral means there is not enough information to make an assessment.

Can this skill suggest fixes or only find problems?

It both identifies issues and recommends concrete mitigations or simpler alternatives, prioritizing high-impact, low-effort fixes.