
prd-v03-outcome-definition skill

/.claude/skills/prd-v03-outcome-definition

This skill defines measurable KPIs for product types, sets targets, and links metrics to go/no-go decisions and downstream processes.

npx playbooks add skill mattgierhart/prd-driven-context-engineering --skill prd-v03-outcome-definition

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: prd-v03-outcome-definition
description: Define measurable success metrics (KPIs) tied to product type during PRD v0.3 Commercial Model. Triggers on requests to define success metrics, set KPI targets, determine what to measure, establish go/no-go thresholds, or when user asks "how do we measure success?", "what metrics matter?", "what's our target?", "how do we know if this works?", "define KPIs", "success criteria". Consumes Product Type Classification (BR-) from v0.2. Outputs KPI- entries with thresholds, evidence sources, and downstream gate linkages.
---

# Outcome Definition

Position in HORIZON workflow: v0.2 Product Type Classification → **v0.3 Outcome Definition** → v0.3 Pricing Model Selection

## Metric Quality Hierarchy

Not all metrics are equal. Use this tier system:

| Tier | Metric Types | Why It Matters |
|------|--------------|----------------|
| **Tier 1** | Revenue (MRR, first dollar, ACV), Churn (logo, NRR), LTV:CAC | Revenue validates market fit. "First dollar IS the proof." |
| **Tier 2** | Conversion rates (trial→paid, lead→customer), Time to Value, Activation | Leading indicators that predict Tier 1 outcomes |
| **Tier 3** | Engagement (DAU, sessions), Feature adoption, NPS | "Nice to know" — only track if tied to Tier 1/2 |

**Rule**: Every product needs at least one Tier 1 metric. Tier 3 metrics without Tier 1/2 correlation are vanity metrics.
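As a quick illustration, the rule can be expressed as a check. This is a minimal sketch; the `Kpi` shape and function name are assumptions for illustration, not part of the skill's output format, and metric presence stands in for the correlation requirement:

```python
from dataclasses import dataclass

# Hypothetical KPI record; fields are illustrative, not the skill's output format.
@dataclass
class Kpi:
    name: str
    tier: int  # 1, 2, or 3 per the Metric Quality Hierarchy

def validate_metric_set(kpis: list[Kpi]) -> list[str]:
    """Flag hierarchy-rule violations; presence stands in for correlation."""
    issues = []
    if not any(k.tier == 1 for k in kpis):
        issues.append("No Tier 1 metric: add revenue, churn, or LTV:CAC.")
    if any(k.tier == 3 for k in kpis) and not any(k.tier in (1, 2) for k in kpis):
        issues.append("Tier 3 without Tier 1/2 backing: vanity metrics.")
    return issues

# Both rules fire for an engagement-only metric set:
print(validate_metric_set([Kpi("DAU", tier=3)]))
```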

## Product Type × Metric Selection

Metrics must align with the product type from the v0.2 classification (a lookup sketch follows the table):

| Product Type | Primary Metrics | Anti-Metrics (Avoid) |
|--------------|-----------------|----------------------|
| **Clone** | Feature parity score, Price delta vs. leader, TTFV vs. leader | Generic engagement (doesn't prove you beat leader) |
| **Undercut** | Price per [unit] vs. leader, Niche conversion rate, CAC in target segment | Broad market share (you're niche by design) |
| **Unbundle** | Category NPS vs. platform, Vertical retention, Feature depth usage | Platform-level metrics (irrelevant to your slice) |
| **Slice** | Marketplace ranking, Install→activate rate, Platform retention lift | TAM metrics (platform owns the market) |
| **Wrapper** | Time saved per workflow, API reliability, Integration adoption | Standalone usage (value is in connection) |
| **Innovation** | Education→activation conversion, Behavioral change rate, Reference customers | User counts without activation (people try, don't convert) |
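The table transcribes directly into a lookup, which is one way a v0.2 classification could seed candidate KPI- entries. The dictionary mirrors the table above; the helper function is a hypothetical convenience:

```python
# Primary metrics per product type, transcribed from the table above.
PRIMARY_METRICS = {
    "Clone": ["Feature parity score", "Price delta vs. leader", "TTFV vs. leader"],
    "Undercut": ["Price per [unit] vs. leader", "Niche conversion rate", "CAC in target segment"],
    "Unbundle": ["Category NPS vs. platform", "Vertical retention", "Feature depth usage"],
    "Slice": ["Marketplace ranking", "Install→activate rate", "Platform retention lift"],
    "Wrapper": ["Time saved per workflow", "API reliability", "Integration adoption"],
    "Innovation": ["Education→activation conversion", "Behavioral change rate", "Reference customers"],
}

def candidate_metrics(product_type: str) -> list[str]:
    """Return the primary metrics for a v0.2 product type classification."""
    try:
        return PRIMARY_METRICS[product_type]
    except KeyError:
        raise ValueError(f"Unknown product type: {product_type!r}") from None
```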

## Leading vs. Lagging Framework

Every product needs BOTH:

**Leading Indicators** (actionable now, predict outcomes):
- Sequences sent, open rates, trial starts
- Time to first value, activation rate
- Feature adoption in first 7 days

**Lagging Indicators** (confirm strategy worked):
- MRR, churn rate, LTV:CAC
- Net Revenue Retention (NRR)
- Customer count, logo churn

**Pattern**: Track leading weekly, lagging monthly. If leading indicators fail, you can pivot before lagging indicators confirm disaster.
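A minimal sketch of that cadence, assuming a simple date-based review schedule (the helper is illustrative, not part of the skill):

```python
import datetime

# Review cadence per the pattern above; the 7/30-day values are the document's,
# the helper is an illustrative assumption.
CADENCE_DAYS = {"Leading": 7, "Lagging": 30}

def next_review(category: str, last_review: datetime.date) -> datetime.date:
    """Leading indicators are reviewed weekly, lagging monthly."""
    return last_review + datetime.timedelta(days=CADENCE_DAYS[category])

print(next_review("Leading", datetime.date(2025, 1, 6)))  # 2025-01-13
print(next_review("Lagging", datetime.date(2025, 1, 6)))  # 2025-02-05
```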

## Target-Setting Rules

Targets must be evidence-based, never arbitrary:

**Good targets** (use these approaches):
- Competitor benchmark × safety margin: "SMB churn benchmark 3-5% → use 5%"
- Revenue gates: "First dollar by Day 14" (Signal → $1: 14 days)
- Ratio thresholds: "LTV:CAC ≥ 3:1" (worked example after the lists)
- Time bounds: "TTFV < 5 minutes for self-serve"

**Bad targets** (anti-patterns):
- Round numbers without evidence: "10% improvement"
- Engagement without revenue tie: "1000 DAU"
- Aspirational without baseline: "Best in class retention"
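For the ratio threshold, a worked example helps. This sketch assumes the common SaaS definitions LTV = ARPA × gross margin ÷ monthly churn and CAC = acquisition spend ÷ customers acquired; all numbers are hypothetical:

```python
# Worked example of the LTV:CAC >= 3:1 gate. Formulas are the common SaaS
# definitions; every number below is hypothetical.
arpa = 99.0            # avg revenue per account per month
gross_margin = 0.80
monthly_churn = 0.04   # within the 3-5% SMB benchmark cited above
ltv = arpa * gross_margin / monthly_churn  # 1980.0
cac = 30_000 / 50                          # spend / customers acquired = 600.0
ratio = ltv / cac                          # 3.3
print(f"LTV:CAC = {ratio:.1f}:1 -> {'pass' if ratio >= 3 else 'fail'}")
# LTV:CAC = 3.3:1 -> pass
```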

## Output Template

Create KPI- entries in this format:

```
KPI-XXX: [Metric Name]
Type: [Tier 1 | Tier 2 | Tier 3]
Category: [Leading | Lagging]
Definition: [Exact calculation formula]
Target: [Specific threshold with evidence source]
Evidence: [CFD-XXX or benchmark source]
Downstream Gate: [Which decision uses this — e.g., "v0.5 Red Team kill criteria"]
Measurement: [How/when measured — e.g., "Weekly via Mixpanel"]
```

**Example KPI- entry:**
```
KPI-001: Time to First Revenue
Type: Tier 1
Category: Lagging
Definition: Days from market signal identification to first paying customer
Target: ≤14 days (GearHeart standard: Signal → $1: 14 days)
Evidence: BR-001 (GearHeart methodology)
Downstream Gate: v0.5 Red Team — if not hit by Day 21, evaluate pivot
Measurement: Manual tracking in PRD changelog
```
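If KPI- entries are also tracked programmatically, the template fields map onto a simple record. This is one possible shape, not a format the skill mandates:

```python
from dataclasses import dataclass

# One possible in-code shape for a KPI- entry; field names mirror the template
# above but are not mandated by the skill.
@dataclass
class KpiEntry:
    id: str
    name: str
    tier: int         # 1 | 2 | 3
    category: str     # "Leading" | "Lagging"
    definition: str
    target: str
    evidence: str     # CFD-XXX or benchmark source
    downstream_gate: str
    measurement: str

kpi_001 = KpiEntry(
    id="KPI-001",
    name="Time to First Revenue",
    tier=1,
    category="Lagging",
    definition="Days from market signal identification to first paying customer",
    target="<=14 days (GearHeart standard: Signal -> $1: 14 days)",
    evidence="BR-001 (GearHeart methodology)",
    downstream_gate="v0.5 Red Team -- if not hit by Day 21, evaluate pivot",
    measurement="Manual tracking in PRD changelog",
)
```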

## Anti-Patterns to Avoid

1. **Vanity metrics as primary**: "50K users" means nothing if only 500 pay (see the arithmetic after this list)
2. **Traffic without quality**: high volume with low engagement signals an acquisition-quality problem, not growth
3. **Arbitrary targets**: "10% improvement" without baseline or benchmark
4. **All lagging, no leading**: Can't course-correct if you only see outcomes monthly
5. **Ignoring product type**: Clone metrics ≠ Innovation metrics
6. **Unmeasurable outcomes**: "Better experience" — how do you know?
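The first anti-pattern in numbers (figures hypothetical):

```python
# Anti-pattern 1 in numbers: the headline user count hides the paid rate.
users, paying = 50_000, 500
paid_rate = paying / users  # 0.01
print(f"{users:,} users, {paying} paying -> {paid_rate:.1%} paid")
# 50,000 users, 500 paying -> 1.0% paid
```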

## Downstream Connections

KPI- entries feed into the consumers below (a gate-evaluation sketch follows the table):

| Consumer | What It Uses | Example |
|----------|--------------|---------|
| **v0.5 Red Team** | Kill thresholds | "If KPI-001 not hit by Day 21, pivot" |
| **v0.7 Build Execution** | EPIC acceptance criteria | "EPIC complete when KPI-002 validated" |
| **v0.9 GTM** | Launch dashboard | Track KPI-001, KPI-003 post-launch |
| **BR- Business Rules** | Derived constraints | "BR-XXX: No launch if LTV:CAC <3:1" |
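A sketch of how a gate consumer might evaluate the KPI-001 kill rule from the table; the function signature and date handling are illustrative assumptions:

```python
import datetime

# Illustrative evaluation of the KPI-001 kill rule: trigger a pivot review if
# first revenue has not landed by Day 21 after signal identification.
def red_team_gate(signal_date: datetime.date,
                  first_revenue_date: datetime.date | None,
                  today: datetime.date) -> str:
    deadline = signal_date + datetime.timedelta(days=21)
    if first_revenue_date is not None and first_revenue_date <= deadline:
        return "pass"
    if today > deadline:
        return "evaluate pivot"  # kill threshold hit
    return "on track"            # still inside the window

print(red_team_gate(datetime.date(2025, 3, 1), None, datetime.date(2025, 3, 25)))
# evaluate pivot
```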

## Detailed References

- **Good/bad examples**: See `references/examples.md`
- **Benchmark sources**: See `references/benchmarks.md`
- **KPI template worksheet**: See `assets/kpi.md`

Overview

This skill defines measurable success metrics (KPIs) tied to a product type during PRD v0.3 Commercial Model. It translates the v0.2 Product Type Classification into prioritized Tier 1–3 metrics, sets evidence-based targets, and links each KPI to downstream gates. Outputs are KPI- entries with definitions, targets, evidence sources, measurement cadence, and decision linkages.

How this skill works

On request, the skill consumes the Product Type Classification from v0.2 and selects metrics aligned to that type using the Metric Quality Hierarchy. It recommends both leading and lagging indicators, proposes evidence-based targets (benchmarks, competitor data, or internal baselines), and formats each result as KPI- entries with thresholds, evidence sources, measurement method, and downstream gate linkages. It flags anti-patterns and ensures at least one Tier 1 metric is present.

When to use it

  • Defining success criteria during PRD v0.3 Commercial Model
  • When asked “how do we measure success?” or “what metrics matter?”
  • Setting KPI targets and go/no-go thresholds for a product initiative
  • Translating product type classification into measurable outcomes
  • Preparing KPI inputs for v0.5 Red Team or v0.9 GTM dashboards

Best practices

  • Always include at least one Tier 1 (revenue/churn/LTV:CAC) metric
  • Pair leading indicators (weekly) with lagging indicators (monthly) for early course correction
  • Set targets from evidence: benchmarks, competitor data, or empirical baselines—avoid arbitrary round numbers
  • Map each KPI to a downstream decision gate (e.g., v0.5 Red Team kill criteria)
  • Avoid Tier 3 vanity metrics unless correlated to Tier 1/2 outcomes

Example use cases

  • Clone product: produce KPI- entries for feature parity, time-to-first-value vs. leader, and price delta targets
  • Undercut strategy: define price-per-unit targets, niche conversion rate, and CAC thresholds tied to the target segment
  • Wrapper/integration: set KPIs for time saved per workflow, integration adoption rate, and API reliability with measurement methods
  • Innovation play: define education→activation conversion, behavioral change rate, and reference-customer targets for early validation
  • GTM prep: export KPI- entries into the launch dashboard and v0.5 Red Team kill rules

FAQ

What if we lack competitor benchmarks to set targets?

Use internal baselines or conservative industry proxies, document the evidence source as an internal CFD- or BR- reference, and set short evaluation windows so targets can be revisited quickly.

Can we use engagement metrics as primary KPIs?

Only if they correlate to Tier 1/2 outcomes. Engagement alone is often a vanity metric—ensure a clear causal path to revenue or retention.