---
name: six-thinking-hats
description: "Apply Edward de Bono's Six Thinking Hats methodology to software testing for comprehensive quality analysis. Use when designing test strategies, conducting test retrospectives, analyzing test failures, evaluating testing approaches, or facilitating testing discussions. Each hat provides a distinct testing perspective: facts (White), risks (Black), benefits (Yellow), creativity (Green), emotions (Red), and process (Blue)."
category: methodology
priority: medium
tokenEstimate: 1100
agents: [qe-quality-analyzer, qe-regression-risk-analyzer, qe-test-generator]
implementation_status: optimized
optimization_version: 1.0
last_optimized: 2025-12-03
dependencies: []
quick_reference_card: true
tags: [thinking, methodology, decision-making, collaboration, analysis]
---
# Six Thinking Hats for Testing
<default_to_action>
When analyzing testing decisions:
1. DEFINE focus clearly (specific testing question)
2. APPLY each hat sequentially (5 min each)
3. DOCUMENT insights per hat
4. SYNTHESIZE into action plan
**Quick Hat Rotation (30 min):**
```markdown
🤍 WHITE (5 min) - Facts only: metrics, data, coverage
❤️ RED (3 min) - Gut feelings (no justification needed)
🖤 BLACK (7 min) - Risks, gaps, what could go wrong
💛 YELLOW (5 min) - Strengths, opportunities, what works
💚 GREEN (7 min) - Creative ideas, alternatives
🔵 BLUE (3 min) - Action plan, next steps
```
**Example for "API Test Strategy":**
- 🤍 47 endpoints, 30% coverage, 12 integration tests
- ❤️ Anxious about security, confident on happy paths
- 🖤 No auth tests, rate limiting untested, edge cases missing
- 💛 Good docs, CI/CD integrated, team experienced
- 💚 Contract testing with Pact, chaos testing, property-based
- 🔵 Security tests first, contract testing next sprint
</default_to_action>
## Quick Reference Card
### The Six Hats
| Hat | Focus | Key Question |
|-----|-------|--------------|
| 🤍 **White** | Facts & Data | What do we KNOW? |
| ❤️ **Red** | Emotions | What do we FEEL? |
| 🖤 **Black** | Risks | What could go WRONG? |
| 💛 **Yellow** | Benefits | What's GOOD? |
| 💚 **Green** | Creativity | What ELSE could we try? |
| 🔵 **Blue** | Process | What should we DO? |
### When to Use Each Hat
| Hat | Use For |
|-----|---------|
| 🤍 White | Baseline metrics, test data inventory |
| ❤️ Red | Team confidence check, quality gut feel |
| 🖤 Black | Risk assessment, gap analysis, pre-mortems |
| 💛 Yellow | Strengths audit, quick win identification |
| 💚 Green | Test innovation, new approaches, brainstorming |
| 🔵 Blue | Strategy planning, retrospectives, decision-making |
---
## Hat Details
### 🤍 White Hat - Facts & Data
**Output: Quantitative testing baseline**
Questions:
- What test coverage do we have?
- What is our pass/fail rate?
- What environments exist?
- What is our defect history?
```
Example Output:
Coverage: 67% line, 45% branch
Test Suite: 1,247 unit, 156 integration, 23 E2E
Execution Time: Unit 3min, Integration 12min, E2E 45min
Defects: 23 open (5 critical, 8 major, 10 minor)
```
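A minimal sketch of gathering this baseline automatically, assuming an Istanbul/nyc `json-summary` coverage report; the report path and the hardcoded suite counts are assumptions to adapt to your tooling:
```typescript
// Sketch: build a White Hat baseline from an Istanbul/nyc
// coverage-summary.json report (json-summary reporter assumed).
import { readFileSync } from "node:fs";

interface WhiteHatBaseline {
  lineCoveragePct: number;
  branchCoveragePct: number;
  suiteCounts: { unit: number; integration: number; e2e: number };
}

function collectBaseline(coveragePath: string): WhiteHatBaseline {
  // The json-summary reporter nests aggregate numbers under `total`.
  const summary = JSON.parse(readFileSync(coveragePath, "utf8"));
  return {
    lineCoveragePct: summary.total.lines.pct,
    branchCoveragePct: summary.total.branches.pct,
    // Placeholder counts: source these from your test runner's results.
    suiteCounts: { unit: 1247, integration: 156, e2e: 23 },
  };
}

console.log(collectBaseline("coverage/coverage-summary.json"));
```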
### 🖤 Black Hat - Risks & Cautions
**Output: Comprehensive risk assessment**
Questions:
- What could go wrong in production?
- What are we NOT testing?
- What assumptions might be wrong?
- Where are the coverage gaps?
```
HIGH RISKS:
- No load testing (production outage risk)
- Auth edge cases untested (security vulnerability)
- Database failover never tested (data loss risk)
```
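To make Black Hat findings comparable across sessions, a simple likelihood × impact ranking helps; the 1-3 scale in this sketch is an assumption, not part of the methodology:
```typescript
// Sketch: rank Black Hat findings by likelihood x impact so the
// riskiest gaps surface first. The 1-3 scale is an assumption.
interface Risk {
  finding: string;       // what is untested or could go wrong
  likelihood: 1 | 2 | 3; // 3 = likely to occur
  impact: 1 | 2 | 3;     // 3 = severe (outage, data loss, breach)
}

const risks: Risk[] = [
  { finding: "No load testing", likelihood: 2, impact: 3 },
  { finding: "Auth edge cases untested", likelihood: 3, impact: 3 },
  { finding: "Database failover never tested", likelihood: 1, impact: 3 },
];

// Highest score first: these feed the top of the Blue Hat action plan.
const ranked = [...risks].sort(
  (a, b) => b.likelihood * b.impact - a.likelihood * a.impact
);
console.table(ranked);
```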
### 💛 Yellow Hat - Benefits & Optimism
**Output: Strengths and opportunities**
Questions:
- What's working well?
- What strengths can we leverage?
- What quick wins are available?
```
STRENGTHS:
- Strong CI/CD pipeline
- Team expertise in automation
- Stakeholders value quality
QUICK WINS:
- Add smoke tests (reduce incidents)
- Automate manual regression (save 2 days/release)
```
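The "add smoke tests" quick win can be as small as one health-check test. A sketch using Vitest; the endpoint, port, and response shape are assumptions about your service:
```typescript
// Sketch: a single smoke test with Vitest. The /health endpoint,
// base URL, and { status: "ok" } body are assumed, not prescribed.
import { test, expect } from "vitest";

test("smoke: service is up and reports healthy", async () => {
  const res = await fetch("http://localhost:3000/health");
  expect(res.status).toBe(200);

  const body = await res.json();
  expect(body.status).toBe("ok");
});
```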
### 💚 Green Hat - Creativity
**Output: Innovative testing ideas**
Questions:
- How else could we test this?
- What if we tried something completely different?
- What emerging techniques could we adopt?
```
IDEAS:
1. AI-powered test generation
2. Chaos engineering for resilience
3. Property-based testing for edge cases
4. Production traffic replay
5. Synthetic monitoring
```
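As a concrete taste of idea 3, property-based testing asserts an invariant over generated inputs instead of hand-picked examples. A sketch with fast-check; `normalizeEmail` is a hypothetical function under test:
```typescript
// Sketch: property-based testing with fast-check. Instead of example
// inputs, we assert that normalization holds for ANY generated string.
import fc from "fast-check";

// Hypothetical system under test.
const normalizeEmail = (s: string): string => s.trim().toLowerCase();

fc.assert(
  fc.property(fc.string(), (input) => {
    const once = normalizeEmail(input);
    // Property: normalization is idempotent - a second pass changes nothing.
    return normalizeEmail(once) === once;
  })
);
```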
### ❤️ Red Hat - Emotions
**Output: Team gut feelings (NO justification needed)**
Questions:
- How confident do you feel about quality?
- What makes you anxious?
- What gives you confidence?
```
FEELINGS:
- Confident: Unit tests, API tests
- Anxious: Authentication flow, payment processing
- Frustrated: Flaky tests, slow E2E suite
```
### 🔵 Blue Hat - Process
**Output: Action plan with owners and timelines**
Questions:
- What's our strategy?
- How should we prioritize?
- What's the next step?
```
PRIORITIZED ACTIONS:
1. [Critical] Address security testing gap - Owner: Alice
2. [High] Implement contract testing - Owner: Bob
3. [Medium] Reduce flaky tests - Owner: Carol
```
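Capturing the action plan as data keeps it trackable by agents and dashboards alike; the field names in this sketch are illustrative, not a defined schema:
```typescript
// Sketch: Blue Hat output as a typed structure. Field names are
// assumptions - align them with however your team tracks work.
interface Action {
  priority: "critical" | "high" | "medium";
  description: string;
  owner: string;
  due?: string; // ISO date, if a timeline was agreed
}

const plan: Action[] = [
  { priority: "critical", description: "Address security testing gap", owner: "Alice" },
  { priority: "high", description: "Implement contract testing", owner: "Bob" },
  { priority: "medium", description: "Reduce flaky tests", owner: "Carol" },
];
```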
---
## Session Templates
### Solo Session (30 min)
```markdown
# Six Hats Analysis: [Topic]
## 🤍 White Hat (5 min)
Facts: [list metrics, data]
## ❤️ Red Hat (3 min)
Feelings: [gut reactions, no justification]
## 🖤 Black Hat (7 min)
Risks: [what could go wrong]
## 💛 Yellow Hat (5 min)
Strengths: [what works, opportunities]
## 💚 Green Hat (7 min)
Ideas: [creative alternatives]
## 🔵 Blue Hat (3 min)
Actions: [prioritized next steps]
```
### Team Session (60 min)
- Each hat: 10 minutes
- Rotate through hats as group
- Document on shared whiteboard
- Blue Hat synthesizes at end
---
## Agent Integration
```typescript
// Risk-focused analysis (Black Hat)
const risks = await Task("Identify Risks", {
scope: 'payment-module',
perspective: 'black-hat',
includeMitigation: true
}, "qe-regression-risk-analyzer");
// Creative test approaches (Green Hat)
const ideas = await Task("Generate Test Ideas", {
feature: 'new-auth-system',
perspective: 'green-hat',
includeEmergingTechniques: true
}, "qe-test-generator");
// Comprehensive analysis (All Hats)
const analysis = await Task("Six Hats Analysis", {
topic: 'Q1 Test Strategy',
hats: ['white', 'black', 'yellow', 'green', 'red', 'blue']
}, "qe-quality-analyzer");
```
---
## Agent Coordination Hints
### Memory Namespace
```
aqe/six-hats/
├── analyses/* - Complete hat analyses
├── risks/* - Black hat findings
├── opportunities/* - Yellow hat findings
└── innovations/* - Green hat ideas
```
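A hypothetical sketch of writing findings into this namespace; the `memory` object below is a Map-backed stand-in, since the fleet's real memory API is not shown here:
```typescript
// Hypothetical sketch: persist Black Hat findings under the namespace.
// Replace the in-memory stand-in with the fleet's actual memory API.
const backing = new Map<string, unknown>();
const memory = {
  async store(key: string, value: unknown) { backing.set(key, value); },
  async retrieve(key: string) { return backing.get(key); },
};

await memory.store("aqe/six-hats/risks/payment-module", {
  hat: "black",
  findings: ["No load testing", "Auth edge cases untested"],
  analyzedAt: new Date().toISOString(),
});

console.log(await memory.retrieve("aqe/six-hats/risks/payment-module"));
```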
### Fleet Coordination
```typescript
const analysisFleet = await FleetManager.coordinate({
strategy: 'six-hats-analysis',
agents: [
'qe-quality-analyzer', // White + Blue hats
'qe-regression-risk-analyzer', // Black hat
'qe-test-generator' // Green hat
],
topology: 'parallel'
});
```
---
## Related Skills
- [risk-based-testing](../risk-based-testing/) - Black Hat deep dive
- [exploratory-testing-advanced](../exploratory-testing-advanced/) - Green Hat exploration
- [context-driven-testing](../context-driven-testing/) - Adapt to context
---
## Anti-Patterns
| ❌ Avoid | Why | ✅ Instead |
|----------|-----|-----------|
| Mixing hats | Confuses thinking | One hat at a time |
| Justifying Red Hat | Kills intuition | State feelings only |
| Skipping hats | Misses insights | Use all six |
| Rushing | Shallow analysis | 5 min minimum per hat |
---
## Remember
**Separate thinking modes for clarity.** Each hat reveals different insights. Red Hat intuition often catches what Black Hat analysis misses.
**Everyone wears all hats.** This is parallel thinking, not role-based. The goal is comprehensive analysis, not debate.
---
## FAQ
**How long should a session take?** Solo sessions can run in about 30 minutes; team sessions typically take 60 minutes, with more time per hat.

**Can one person run all hats?** Yes. A solo practitioner can run the full rotation, but team sessions add diversity of perspective and buy-in.