
thinking-in-bets skill

/skills/thinking-in-bets

This skill helps you make better decisions under uncertainty by framing decisions as explicit bets, separating process from outcomes, and reducing bias.

npx playbooks add skill wdavidturner/product-skills --skill thinking-in-bets

Review the files below or copy the command above to add this skill to your agents.

Files (17)
SKILL.md
4.8 KB
---
name: thinking-in-bets
description: Use when asked to "think in bets", "make decisions under uncertainty", "think probabilistically", "avoid resulting", "separate decision quality from outcomes", or "reduce bias in decisions". Helps make explicit bets and evaluate decisions on process, not results. The Thinking in Bets framework (from Annie Duke) applies poker strategy to business and life decisions.
---

# Thinking in Bets

## What It Is

Thinking in Bets is a framework for improving decision quality by separating **decisions from outcomes**. The core insight: **a good decision can have a bad outcome, and a bad decision can have a good outcome—luck is always involved.**

Most people judge decisions by their outcomes (called "resulting"). This is backwards. You can only control the quality of your decision, not the outcome. Annie Duke, a professional poker player turned decision strategist, built this framework from poker, where you're forced to make decisions with incomplete information under uncertainty—exactly like business.

The key shifts:
- Move from "Was I right?" to "Was my thinking process good?"
- Move from "What happened?" to "What did I know at the time?"
- Move from implicit assumptions to explicit, testable beliefs

## When to Use It

Use Thinking in Bets when you need to:

- **Evaluate past decisions** without outcome bias clouding judgment
- **Make decisions under uncertainty** where luck will influence results
- **Improve team decision-making** in meetings and planning
- **Set up pre-mortems and kill criteria** for projects
- **Shorten feedback loops** on decisions with delayed outcomes
- **Reduce cognitive biases** like overconfidence, hindsight bias, and sunk cost
- **Run better meetings** that surface true opinions, not groupthink

## When Not to Use It

- The decision is trivial with low stakes
- You have perfect information (rare)
- You're looking for permission to take a risk you've already decided on

## Patterns

Detailed examples showing how to apply Thinking in Bets correctly. Each pattern shows a common mistake and the correct approach.

### Critical (get these wrong and you've wasted your time)

| Pattern | What It Teaches |
|---------|-----------------|
| [resulting](patterns/resulting.md) | Don't judge decisions by outcomes—judge by the process |
| [implicit-vs-explicit](patterns/implicit-vs-explicit.md) | Make intuitions explicit so you can test and improve them |
| [discover-discuss-decide](patterns/discover-discuss-decide.md) | Separate discovery (async), discussion (meetings), and decisions |
| [premortems-without-kill-criteria](patterns/premortems-without-kill-criteria.md) | Pre-mortems are useless without committed actions |

### High Impact

| Pattern | What It Teaches |
|---------|-----------------|
| [overconfidence](patterns/overconfidence.md) | Use ranges, not point estimates—you know less than you think |
| [hindsight-bias](patterns/hindsight-bias.md) | What seems obvious now wasn't obvious then |
| [sunk-cost](patterns/sunk-cost.md) | Past investment is irrelevant to future decisions |
| [anchoring-in-groups](patterns/anchoring-in-groups.md) | First opinions contaminate everyone else's judgment |
| [long-feedback-loops](patterns/long-feedback-loops.md) | No feedback loop is actually long—find intermediate signals |
| [seeking-alignment](patterns/seeking-alignment.md) | Stop seeking agreement—it's coercive and unrealistic |

### Medium Impact

| Pattern | What It Teaches |
|---------|-----------------|
| [confirmation-bias](patterns/confirmation-bias.md) | Seek out information that proves you wrong |
| [mental-time-travel](patterns/mental-time-travel.md) | Ask: "How will I feel about this in 10 years?" |
| [forecasting-ranges](patterns/forecasting-ranges.md) | Replace certainty with calibrated probability ranges |
| [nevertheless-leadership](patterns/nevertheless-leadership.md) | Hear everyone, then decide—"nevertheless" is your friend |


## Deep Dives

Read only when you need extra detail.

- `references/thinking-in-bets-playbook.md`: Expanded framework detail, checklists, and examples.

## Resources

**Books:**
- *Thinking in Bets* by Annie Duke — the core framework
- *Quit: The Power of Knowing When to Walk Away* by Annie Duke — when to stop
- *Thinking, Fast and Slow* by Daniel Kahneman — the psychology underneath

**Related:**
- *Superforecasting* by Philip Tetlock — calibrated probabilistic thinking
- *The Scout Mindset* by Julia Galef — seeking truth over confirmation
- *Algorithms to Live By* by Brian Christian — decision theory made practical

**Annie Duke:**
- Substack: "Thinking in Bets"
- Course on Maven: Effective Decision Making
- Alliance for Decision Education (co-founded)

---

*Framework based on Annie Duke's work and her conversation with Lenny Rachitsky on the Lenny's Podcast.*

Overview

This skill teaches the Thinking in Bets framework to improve decision quality under uncertainty by separating decision process from outcomes. It helps teams and individuals make probabilistic judgments, surface hidden assumptions, and evaluate choices based on process rather than luck. Use it to build repeatable decision habits that reduce bias and improve learning.

How this skill works

The skill guides you to turn decisions into explicit bets: state your beliefs, assign probabilities or ranges, and document the information you had at the time. It provides patterns and checklists to avoid common errors like resulting, overconfidence, and hindsight bias, and to run structured pre-mortems, post-mortems, and calibration exercises. Results are evaluated by whether the decision process was sound given available evidence, not by eventual outcomes.
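As a concrete illustration of turning a decision into an explicit bet, here is a minimal sketch of what a recorded bet could look like if you kept decision records in code. The `DecisionBet` class and its field names are hypothetical, not part of the skill's files; the point is that the belief, the probability range, and the evidence are written down before the outcome is known.

```python
# Minimal sketch of an explicit decision record (the structure is illustrative).
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DecisionBet:
    decision: str                 # what you chose to do
    belief: str                   # the claim you are betting on
    p_low: float                  # lower bound of your probability range
    p_high: float                 # upper bound of your probability range
    evidence: list[str] = field(default_factory=list)  # what you knew at the time
    outcome: str | None = None    # filled in later; the process is judged on the lines above

bet = DecisionBet(
    decision="Ship the beta to 5% of users",
    belief="Activation rate stays above 40% during the beta",
    p_low=0.6,
    p_high=0.8,
    evidence=["Last two betas held activation within 2 points", "No pricing change this cycle"],
)
```

Reviewing the bet later means asking whether 0.6-0.8 was a reasonable range given that evidence, not whether activation happened to hold.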

When to use it

  • Evaluating past projects where outcome bias might hide process errors
  • Making high-impact choices with uncertain outcomes (product launches, hires, pivots)
  • Running meetings to surface true opinions and avoid anchoring or groupthink
  • Designing experiments or milestones with explicit kill criteria
  • Shortening feedback loops for decisions that have delayed results

Best practices

  • Convert key assumptions into explicit, testable bets with probability ranges
  • Run pre-mortems that conclude with concrete kill criteria and contingency triggers
  • Separate discovery (async), discussion (synchronous), and decision ownership
  • Use ranges and calibration exercises to combat overconfidence (see the calibration sketch after this list)
  • Judge decisions by the information available at the time, not by luck-driven outcomes
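The calibration exercise mentioned above can be as simple as scoring past forecasts once their outcomes are known. Below is a minimal sketch using the Brier score; the forecast history is invented for illustration.

```python
# Brier score: mean squared error between a stated probability and the 0/1 outcome.
# Lower is better; always answering 0.5 scores 0.250, so beating that takes real calibration.
def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """forecasts: pairs of (stated probability, whether the event actually happened)."""
    return sum((p - float(happened)) ** 2 for p, happened in forecasts) / len(forecasts)

# Hypothetical quarterly review of recorded launch forecasts.
history = [(0.7, True), (0.9, True), (0.6, False), (0.8, False)]
print(f"Brier score: {brier_score(history):.3f}")  # 0.275 here; track the trend over time
```

Tracking the score over time tells you whether your stated probabilities are drifting toward overconfidence.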

Example use cases

  • A PM documents probabilistic forecasts for a new feature and revisits them after launch to calibrate forecasting skills
  • A leadership team runs a pre-mortem, lists failure modes, and commits to stop conditions to avoid sunk-cost escalation (see the sketch after this list)
  • A hiring panel uses anonymous probability estimates to reduce anchoring and surface dissenting views
  • A product team replaces binary trade-off discussions with explicit bets and intermediate milestones to detect early signals
  • A manager coaches a direct report to separate evaluation of their decision process from a single bad result
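For the pre-mortem example above, committing to stop conditions is easiest to honor when the kill criteria are written down as checkable rules before emotions and sunk costs enter the picture. A minimal sketch under that assumption; the metric names and thresholds are invented.

```python
# Hypothetical kill criteria agreed at the pre-mortem, checked at every milestone review.
KILL_CRITERIA = {
    "weekly_active_users": lambda v: v < 500,        # stop if adoption stalls below 500 WAU
    "support_tickets_per_user": lambda v: v > 0.3,   # stop if support load runs hot
}

def triggered_criteria(metrics: dict[str, float]) -> list[str]:
    """Return the names of any kill criteria that have fired."""
    return [name for name, check in KILL_CRITERIA.items() if check(metrics[name])]

# Milestone review: the stop decision follows the pre-committed rule, not sunk cost.
print(triggered_criteria({"weekly_active_users": 420, "support_tickets_per_user": 0.1}))
# -> ['weekly_active_users']
```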

FAQ

Is Thinking in Bets useful for low-stakes decisions?

Not usually; apply the framework where outcome noise matters and the decision quality is worth improving.

How do I avoid turning bets into excuses after a bad outcome?

Record the information and probabilities you had, then critique the process and update your priors—hold the decision process accountable even when luck intervenes.