This skill helps you detect vague requirements, pause for clarifications, and restate problems to ensure mutual understanding before implementation.
`npx playbooks add skill amnadtaowsoam/cerebraskills --skill problem-framing`
---
name: Problem Framing & Ambiguity Resolution
description: Expert-level framework for detecting vague requirements, pausing execution to ask clarifying questions, and ensuring mutual understanding before implementation to reduce hallucinations and rework.
---
# Problem Framing & Ambiguity Resolution
## Overview
Problem framing is the critical first step before any implementation: the agent detects vague or incomplete requirements, pauses execution, and asks clarifying questions. This skill reduces hallucinations, prevents wasted effort, and improves the first-shot success rate by ensuring the agent understands the problem before attempting to solve it. It provides systematic methods for ambiguity detection, question formulation, and problem restatement to establish a clear, shared understanding between humans and AI agents.
## Why This Matters
- **Increases First-shot Success**: Prevents wasted time on incorrect implementations by clarifying requirements upfront
- **Reduces Re-work**: Saves significant time by avoiding work on misunderstood requirements
- **Improves User Satisfaction**: Delivers outputs that match actual needs rather than assumptions
- **Enhances Communication**: Reduces back-and-forth by establishing clear understanding early
- **Supports AI-Human Collaboration**: Enables more effective collaboration between AI agents and human users
---
## Core Concepts
### 1. Ambiguity Detection
Identify vague or incomplete requirements using clear indicators:
**Lazy Prompt Indicators:**
- Generic verbs ("Fix it", "Make it work", "Update this")
- Missing context ("Add a feature" - what feature?)
- No constraints ("Optimize this code" - for what? speed? memory?)
- Ambiguous scope ("Improve the UI" - which part? what improvements?)
- Missing data ("Handle errors" - what errors? how?)
- Undefined success ("Make it better" - better than what? by what metric?)
**Detection Checklist:**
- What exactly needs to be done?
- What are the success criteria?
- What are the constraints (time, resources, scope)?
- What is the current state/context?
- What is the desired end state?
- Who are the stakeholders/users?
- What are the technical specifics?
- What information is missing?
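The indicators and checklist above can be condensed into a quick heuristic scan. This is a hedged sketch: the keyword list, the 20-word threshold, and the context regex are illustrative stand-ins, not a definitive detector.

```typescript
// Heuristic ambiguity scan: flags common lazy-prompt indicators.
const GENERIC_VERBS = ["fix", "improve", "optimize", "update", "handle", "make"];

interface AmbiguityReport {
  tooShort: boolean;       // under ~20 words
  genericVerb: boolean;    // "fix it", "make it better", ...
  missingContext: boolean; // no file paths, code fences, or error text
  isAmbiguous: boolean;
}

function detectAmbiguity(prompt: string): AmbiguityReport {
  const words = prompt.trim().split(/\s+/);
  const lower = prompt.toLowerCase();
  const tooShort = words.length < 20;
  const genericVerb = GENERIC_VERBS.some((v) => lower.includes(v));
  // Very rough proxy for "context was provided": paths, code, or error text.
  const missingContext = !/[\/\\]|```|error|exception/i.test(prompt);
  return {
    tooShort,
    genericVerb,
    missingContext,
    isAmbiguous: tooShort && (genericVerb || missingContext),
  };
}
```

A real agent would combine signals like these with its own judgment; the point is that the checklist is mechanical enough to run before any implementation work starts.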
### 2. Stop-and-Ask Protocol
Decision flow for handling ambiguous requests:
```
Receive Task → Is task clear?
├─ Yes → Proceed with Implementation
└─ No → Stop and Identify Gaps
        → Formulate Clarifying Questions
        → Present Options to User
        → Receive Clarification
        → Still ambiguous?
           ├─ Yes → Formulate more questions
           └─ No → Restate Problem
                   → Proceed with Implementation
```
**When to ALWAYS stop:**
- Prompt is under 20 words and lacks technical specifics
- Multiple interpretations exist
- Critical constraints are missing (performance, security, compatibility)
- Scope is undefined (what's included vs. excluded)
- Success criteria are absent
- Context is insufficient (no file paths, no code snippets, no error messages)
### 3. Question Formulation Principles
Good clarifying questions are:
- **Specific**: Target the exact missing information
- **Actionable**: The user can answer with concrete details
- **Non-leading**: Don't assume the answer
- **Prioritized**: Ask critical questions first
- **Multiple-choice**: Offer options when appropriate
**Question Templates:**
1. **A/B Testing Intent**: "Which of these options best matches what you need?"
2. **Constraint Clarification**: "What are the performance/security/compatibility requirements?"
3. **Success Criteria**: "How will we know when this is complete?"
4. **Context Gathering**: "Please share relevant code, error messages, or file paths"
### 4. Problem Restatement
After receiving clarification, ALWAYS restate the problem in your own words:
```markdown
## Problem Restatement
### Objective
[Clear, single-sentence statement of what needs to be accomplished]
### Requirements
- [ ] Requirement 1
- [ ] Requirement 2
### Constraints
- [ ] Constraint 1
- [ ] Constraint 2
### Success Criteria
- [ ] Success criterion 1
- [ ] Success criterion 2
**Please confirm this understanding is correct before I proceed.**
```
## Quick Start
1. **Receive Task**: Get the task or request from the user
2. **Detect Ambiguity**: Check for lazy prompt indicators such as generic verbs, missing context, undefined scope, or absent success criteria
3. **Stop and Identify Gaps**: If the task is ambiguous, identify what's missing: the exact work to be done, the success criteria, and the constraints
4. **Formulate Questions**: Create specific, actionable questions prioritized by importance - blockers first, then critical, then important
5. **Present to User**: Show questions with multiple-choice options when appropriate
6. **Receive Clarification**: Get additional details from the user
7. **Restate Problem**: Rewrite the problem in your own words and confirm understanding with user
8. **Proceed with Implementation**: Once confirmed, execute the solution according to clarified requirements
```typescript
// Example: Problem Restatement
// (The Clarification shape and the grouping helper are illustrative; a real
// agent would derive these fields from the user's actual answers.)
interface Clarification {
  question: string;
  answer: string;
  category: "objective" | "requirement" | "constraint" | "success";
}

interface ProblemStatement {
  objective: string;
  requirements: string[];
  constraints: string[];
  successCriteria: string[];
}

function restateProblem(
  task: string,
  clarifications: Clarification[]
): ProblemStatement {
  // Group answers by category; fall back to the raw task for the objective.
  const answersFor = (c: Clarification["category"]) =>
    clarifications.filter((x) => x.category === c).map((x) => x.answer);

  return {
    objective: answersFor("objective")[0] ?? task,
    requirements: answersFor("requirement"),
    constraints: answersFor("constraint"),
    successCriteria: answersFor("success"),
  };
}
```
## Production Checklist
- [ ] Task received and analyzed
- [ ] Ambiguity detection performed
- [ ] Lazy prompt indicators checked
- [ ] Missing information identified
- [ ] Clarifying questions formulated (specific, actionable, non-leading)
- [ ] Questions prioritized (blockers → critical → important → nice-to-have)
- [ ] Multiple-choice options provided when appropriate
- [ ] User clarifications received
- [ ] Problem restated in own words
- [ ] Understanding confirmed with user
- [ ] Implementation proceeds only after confirmation
## Anti-patterns
1. **Over-clarifying**: Asking for details that don't affect the solution wastes time
2. **Leading Questions**: Suggesting answers limits the user's ability to provide the information you actually need
3. **Skipping Restatement**: Always confirm understanding to avoid misalignment
4. **Assuming Intent**: Don't guess what the user wants - ask explicitly
5. **Proceeding with Ambiguity**: If unsure, ask rather than guess and risk wasted work
6. **Ignoring Context**: Use available context, but don't assume beyond what's provided
7. **Asking Too Many Questions**: Prioritize and group related questions to avoid overwhelming users
## Integration Points
- **AI Model Integration**: Use with GPT-4, Claude, and other LLMs for systematic problem understanding
- **Prompt Engineering**: Templates integrate with LangChain, PromptPerfect, PromptBase
- **Documentation Platforms**: Store problem restatements in Confluence, Notion, GitHub Wiki
- **Collaboration Tools**: Use Slack, Microsoft Teams, Discord for clarification discussions
- **Version Control**: Track problem statements and clarifications in Git
- **Testing Tools**: Validate clarified requirements with Jest, PyTest, Playwright tests
## Further Reading
- [The Art of Asking Questions](https://medium.com/@davidlee/the-art-of-asking-questions-2d9b5b4b5f6b)
- [Requirements Engineering Best Practices](https://ieeexplore.ieee.org/document/5558220)
- [Ambiguity in Software Requirements](https://dl.acm.org/doi/10.1145/3183440.3183446)
- [Prompt Engineering Guide](https://www.promptingguide.ai/)
- [AI Agent Design Patterns](https://arxiv.org/abs/2308.11432)
---
## Detecting Ambiguity
### Lazy Prompt Indicators
A "lazy prompt" is a request that lacks sufficient context, constraints, or specificity:
| Pattern | Example | Risk Level |
|---------|---------|------------|
| **Generic verbs** | "Fix it", "Make it work", "Update this" | High |
| **Missing context** | "Add a feature" (what feature?) | High |
| **No constraints** | "Optimize this code" (for what? speed? memory?) | Medium |
| **Ambiguous scope** | "Improve the UI" (which part? what improvements?) | Medium |
| **Missing data** | "Handle errors" (what errors? how?) | High |
| **Undefined success** | "Make it better" (better than what? by what metric?) | High |
### Ambiguity Detection Checklist
Before proceeding with any task, verify:
```markdown
## Ambiguity Detection Checklist
### Input Clarity
- [ ] What exactly needs to be done?
- [ ] What are the success criteria?
- [ ] What are the constraints (time, resources, scope)?
### Context Understanding
- [ ] What is the current state/context?
- [ ] What is the desired end state?
- [ ] Who are the stakeholders/users?
### Technical Specificity
- [ ] What technologies/frameworks are involved?
- [ ] What are the performance requirements?
- [ ] What are the edge cases to consider?
### Missing Information
- [ ] What assumptions am I making?
- [ ] What information is missing?
- [ ] What dependencies exist?
```
## The Stop-and-Ask Protocol
### Decision Flow
```mermaid
graph TD
A[Receive Task] --> B{Is task clear?}
B -->|Yes| C[Proceed with Implementation]
B -->|No| D[Stop and Identify Gaps]
D --> E[Formulate Clarifying Questions]
E --> F[Present Options to User]
F --> G[Receive Clarification]
G --> H{Still ambiguous?}
H -->|Yes| E
H -->|No| I[Restate Problem]
I --> C
```
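The flow above can be sketched as a clarification loop. The `isClear`, `formulateQuestions`, and `askUser` parameters are hypothetical stand-ins for the agent's actual detection and I/O logic; the round cap is a safety valve, not part of the protocol.

```typescript
// Stop-and-Ask loop: keep clarifying until the task is unambiguous.
type AskUser = (questions: string[]) => string; // returns the user's reply

function stopAndAsk(
  task: string,
  isClear: (text: string) => boolean,
  formulateQuestions: (text: string) => string[],
  askUser: AskUser,
  maxRounds = 3
): string {
  let understanding = task;
  for (let round = 0; round < maxRounds && !isClear(understanding); round++) {
    const questions = formulateQuestions(understanding);
    const clarification = askUser(questions);
    understanding = `${understanding}\n${clarification}`; // accumulate context
  }
  return understanding; // restate and confirm before implementing
}
```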
### When to Stop
**ALWAYS stop and ask when:**
1. **The prompt is under 20 words** and lacks technical specifics
2. **Multiple interpretations exist** for the same request
3. **Critical constraints are missing** (performance, security, compatibility)
4. **The scope is undefined** (what's included vs. excluded)
5. **Success criteria are absent** (how do we know it's done?)
6. **Context is insufficient** (no file paths, no code snippets, no error messages)
### When to Proceed
**You may proceed when:**
1. **The request is specific** with clear deliverables
2. **Constraints are stated** (explicit or implied by context)
3. **Success is measurable** (tests, metrics, observable behavior)
4. **Context is sufficient** (files, code, environment details provided)
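The stop/proceed conditions can be encoded as an explicit gate. The `TaskSignals` shape is an assumption for illustration; a real agent would derive these signals from the prompt itself (for example, via the heuristic scan earlier in this document).

```typescript
// Gate check: encode the six stop conditions as an explicit predicate.
interface TaskSignals {
  wordCount: number;
  hasTechnicalSpecifics: boolean;
  interpretationCount: number;  // estimated plausible readings of the request
  constraintsStated: boolean;
  scopeDefined: boolean;
  successCriteriaStated: boolean;
  contextProvided: boolean;     // file paths, code snippets, error messages
}

function shouldStopAndAsk(s: TaskSignals): boolean {
  return (
    (s.wordCount < 20 && !s.hasTechnicalSpecifics) ||
    s.interpretationCount > 1 ||
    !s.constraintsStated ||
    !s.scopeDefined ||
    !s.successCriteriaStated ||
    !s.contextProvided
  );
}
```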
## Formulating Clarifying Questions
### Question Formulation Principles
**Good clarifying questions are:**
- **Specific** - Target the exact missing information
- **Actionable** - The user can answer with concrete details
- **Non-leading** - Don't assume the answer
- **Prioritized** - Ask critical questions first
- **Multiple-choice** - Offer options when appropriate
### Question Templates
#### Template 1: A/B Testing Intent
```markdown
I want to clarify the scope of this request. Which of these best matches what you need?
**Option A:** [Specific interpretation A with details]
**Option B:** [Specific interpretation B with details]
**Option C:** [Specific interpretation C with details]
Please let me know which option is correct, or provide additional details if none match.
```
#### Template 2: Constraint Clarification
```markdown
To implement this correctly, I need to understand the constraints:
- **Performance:** Are there specific latency/throughput requirements?
- **Compatibility:** Which browsers/versions must be supported?
- **Security:** Are there specific security requirements or compliance standards?
- **Resources:** Are there limitations on memory, storage, or external services?
Please provide any relevant constraints, or confirm if standard practices apply.
```
#### Template 3: Success Criteria
```markdown
To ensure I deliver what you need, help me define success:
1. **What should happen when this works correctly?**
2. **What should NOT happen (edge cases to avoid)?**
3. **How will this be tested or verified?**
4. **What observable behavior confirms completion?**
Please provide specific examples or expected outcomes.
```
#### Template 4: Context Gathering
```markdown
I need more context to provide the best solution:
- **Current state:** What does the existing implementation look like?
- **Problem statement:** What specific issue are we solving?
- **Use case:** How will this feature/functionality be used?
- **Related code:** Are there other files or components I should consider?
Please share relevant code snippets, file paths, or error messages.
```
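One way to wire the four templates into code is a lookup keyed by ambiguity type. This is a sketch: the keys and the one-line question strings are condensed from the templates above, not a fixed API.

```typescript
// Map each detected ambiguity type to a condensed clarifying question.
type AmbiguityType = "intent" | "constraints" | "success" | "context";

const QUESTION_TEMPLATES: Record<AmbiguityType, string> = {
  intent: "Which of these options best matches what you need: A, B, or C?",
  constraints: "What are the performance/security/compatibility requirements?",
  success: "How will we know when this is complete?",
  context: "Please share relevant code, error messages, or file paths.",
};

function questionsFor(types: AmbiguityType[]): string[] {
  return types.map((t) => QUESTION_TEMPLATES[t]);
}
```

In practice each key would expand to the full multi-line template rather than a single sentence.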
### Question Prioritization
Ask questions in this order:
1. **Blockers** - Information that absolutely prevents progress
2. **Critical** - Information that significantly affects the approach
3. **Important** - Information that refines the implementation
4. **Nice-to-have** - Information that improves quality but isn't essential
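The ordering above can be enforced with a small sort. The `Priority` union and the question shape are illustrative assumptions:

```typescript
// Question prioritization: blockers first, nice-to-haves last.
type Priority = "blocker" | "critical" | "important" | "nice-to-have";
const PRIORITY_ORDER: Priority[] = ["blocker", "critical", "important", "nice-to-have"];

interface ClarifyingQuestion {
  text: string;
  priority: Priority;
}

function prioritizeQuestions(qs: ClarifyingQuestion[]): ClarifyingQuestion[] {
  // Copy before sorting so the caller's array is left untouched.
  return [...qs].sort(
    (a, b) => PRIORITY_ORDER.indexOf(a.priority) - PRIORITY_ORDER.indexOf(b.priority)
  );
}
```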
## Problem Restatement
### The Restatement Protocol
After receiving clarification, **always** restate the problem in your own words before proceeding. This ensures mutual understanding and catches any remaining ambiguities.
### Restatement Template
```markdown
## Problem Restatement
Based on our discussion, I understand the task as follows:
### Objective
[Clear, single-sentence statement of what needs to be accomplished]
### Requirements
- [ ] Requirement 1
- [ ] Requirement 2
- [ ] Requirement 3
### Constraints
- [ ] Constraint 1
- [ ] Constraint 2
### Success Criteria
- [ ] Success criterion 1
- [ ] Success criterion 2
### Approach
[High-level description of how I plan to solve this]
**Please confirm this understanding is correct before I proceed.**
```
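A minimal renderer can turn a structured statement into the markdown above. The `ProblemStatement` shape mirrors the earlier TypeScript example; the renderer itself is a sketch:

```typescript
// Render a structured problem statement as the markdown restatement template.
interface ProblemStatement {
  objective: string;
  requirements: string[];
  constraints: string[];
  successCriteria: string[];
}

function renderRestatement(p: ProblemStatement): string {
  const checklist = (items: string[]) => items.map((i) => `- [ ] ${i}`).join("\n");
  return [
    "## Problem Restatement",
    "### Objective",
    p.objective,
    "### Requirements",
    checklist(p.requirements),
    "### Constraints",
    checklist(p.constraints),
    "### Success Criteria",
    checklist(p.successCriteria),
    "**Please confirm this understanding is correct before I proceed.**",
  ].join("\n");
}
```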
## Common Ambiguity Patterns
### Pattern 1: "Fix This"
**Ambiguity:** What's broken? What's the expected behavior?
**Clarifying Questions:**
```markdown
To help you fix this, I need more information:
1. **What is the current behavior?** (What's happening now?)
2. **What is the expected behavior?** (What should happen instead?)
3. **When does this occur?** (Specific steps, conditions, or inputs)
4. **Are there any error messages?** (Please share the full error)
5. **What have you tried so far?** (To avoid repeating efforts)
Please provide as much detail as possible.
```
### Pattern 2: "Optimize This"
**Ambiguity:** Optimize for what? Speed? Memory? Code size? Readability?
**Clarifying Questions:**
```markdown
To optimize this effectively, I need to understand the goal:
**What aspect needs optimization?**
- [ ] Execution speed (reduce runtime)
- [ ] Memory usage (reduce RAM consumption)
- [ ] Code size (reduce bundle/bytecode size)
- [ ] Maintainability (improve code clarity)
- [ ] Scalability (handle more load)
**What are the current metrics?**
- Current runtime: ___
- Current memory usage: ___
- Current bundle size: ___
**What are the target metrics?**
- Target runtime: ___
- Target memory usage: ___
- Target bundle size: ___
```
### Pattern 3: "Add a Feature"
**Ambiguity:** What feature? Where? How should it behave?
**Clarifying Questions:**
```markdown
To add this feature correctly, I need details:
**Feature Description**
- What should the feature do?
- Where should it be accessible (UI, API, etc.)?
- Who can use it (authentication/authorization)?
**User Experience**
- How does the user interact with it?
- What are the expected inputs/outputs?
- Are there any edge cases or error conditions?
**Technical Requirements**
- Are there existing components to integrate with?
- Any specific technologies or libraries to use?
- Are there performance requirements?
Please provide a detailed description or mockup if available.
```
### Pattern 4: "Make It Better"
**Ambiguity:** Better in what way? By what metric?
**Clarifying Questions:**
```markdown
To improve this, I need to understand what "better" means:
**Which aspect should be improved?**
- [ ] Performance (speed, efficiency)
- [ ] User experience (usability, accessibility)
- [ ] Code quality (maintainability, readability)
- [ ] Reliability (error handling, robustness)
- [ ] Security (vulnerabilities, compliance)
- [ ] Design (aesthetics, consistency)
**What is the current issue?**
- What problems are users experiencing?
- What feedback have you received?
- What metrics indicate a problem?
**What is the desired outcome?**
- What should improve and by how much?
- How will we measure success?
Please provide specific examples or metrics.
```
## Quick Reference
### Decision Tree
```
Is the request clear?
├─ Yes → Proceed with implementation
└─ No → Stop and identify gaps
   ├─ Is it a lazy prompt?
   │  └─ Ask for specific details
   ├─ Are constraints missing?
   │  └─ Ask for performance/security/compatibility requirements
   ├─ Is scope undefined?
   │  └─ Ask what's included/excluded
   └─ Are success criteria absent?
      └─ Ask how to measure completion
```
### Question Templates
| Situation | Template |
|-----------|----------|
| **Multiple interpretations** | "Which of these options best matches your need: A, B, or C?" |
| **Missing constraints** | "What are the performance/security/compatibility requirements?" |
| **Undefined scope** | "What should be included vs. excluded from this change?" |
| **No success criteria** | "How will we know when this is complete?" |
| **Insufficient context** | "Please share relevant code, error messages, or file paths" |
| **Ambiguous "better"** | "Better in what way? Speed, UX, code quality, etc.?" |
## Best Practices
1. **Ask Early** - Detect ambiguity before starting implementation
2. **Be Specific** - Target exact missing information with questions
3. **Offer Options** - Provide multiple-choice when appropriate
4. **Prioritize** - Ask blockers first, then critical, then nice-to-have
5. **Restate Always** - Confirm understanding before proceeding
6. **Use Context** - Leverage available information, but don't assume beyond it
7. **Be Concise** - Don't overwhelm with too many questions at once
8. **Stay Focused** - Ask questions relevant to solving the problem
9. **Document** - Keep record of clarifications and problem statements
10. **Iterate** - Continue clarifying until understanding is clear
## Common Pitfalls
1. **Assuming intent** - Don't guess what the user wants; ask
2. **Over-clarifying** - Don't ask for details that don't affect the solution
3. **Leading questions** - Don't suggest answers; let users provide them
4. **Skipping restatement** - Always restate the problem after clarification
5. **Proceeding with ambiguity** - If unsure, ask rather than guess
6. **Ignoring context** - Use available context, but don't assume
7. **Asking too many questions** - Prioritize and group related questions
8. **Not offering options** - When appropriate, provide multiple-choice options
## Summary
This skill teaches an expert-level framework for detecting vague or incomplete requirements, pausing execution, and asking focused clarifying questions before implementation. It reduces hallucinations and rework by ensuring mutual understanding between the agent and the user, and it provides concrete checklists, question templates, and a restatement protocol to confirm correctness before proceeding.

The skill inspects incoming requests for lazy-prompt indicators (generic verbs, missing context, undefined success criteria, ambiguous scope) and runs an ambiguity-detection checklist. If gaps are found, it triggers the Stop-and-Ask protocol: identify blockers, generate prioritized clarifying questions (often multiple-choice), present options, collect clarifications, and then restate the problem in a structured format for confirmation. Only after confirmation does execution proceed.

## FAQ
**How many questions should I ask before proceeding?**
Prioritize blockers and critical items. Ask the minimum set that prevents mistaken work, and group related items to avoid overwhelming the user.

**What if the user still gives vague answers?**
Iterate the Stop-and-Ask loop: refine questions to be more specific or offer concrete options, then restate the result and request explicit confirmation before proceeding.