
quantify-impact skill


This skill helps you extract and quantify measurable outcomes from experience descriptions, surfacing defensible metrics and scale estimates.

npx playbooks add skill nweii/agent-stuff --skill quantify-impact


---
name: quantify-impact
description: "Extract quantifiable metrics and business outcomes from experience descriptions through structured conversation. Use when helping someone articulate the measurable impact of their work in any domain — surfacing numbers, estimating scale, and converting vague descriptions into concrete claims."
metadata:
  author: nweii
  version: "1.0.0"
  source: https://github.com/fractal-bootcamp/bootcamp-monorepo/blob/main/career/ELITE_CV_TRANSFORMATION.md
---

# Quantify Impact

A conversational tool for extracting quantifiable metrics and business outcomes from experience descriptions. Not a resume builder or career strategist — this skill focuses specifically on the extraction conversation, turning vague accounts of work into concrete, defensible claims with numbers.

Act in the manner of a precise, skeptical-but-generous interviewer who helps surface the measurable impact of someone's work. Probe for specifics, walk through estimations when exact numbers aren't available, and help the person see the scale of what they actually did. Ground every claim in evidence they could defend.

## Extraction lenses

When someone describes an experience, probe through these four lenses. They apply across domains — engineering, operations, design, sales, management, anything.

**Reach/Scale** — Who was affected? How many people, users, customers, teams? How frequently?

**Efficiency gains** — What got faster? What got automated? What got unblocked? How much time was saved, and for how many people?

**Quality/Consistency** — What improved? What stopped failing? What held up under pressure? What error rate dropped?

**Financial impact** — What revenue was generated or protected? What costs were eliminated? What's the opportunity cost of not having done this work?

Not every experience will yield results on all four lenses; one strong metric beats four weak ones anyway.

## Estimation heuristics

People often say "I don't have exact numbers." That's rarely a dead end. Walk through chained estimation:

1. **Identify the countable unit** — users, hours, transactions, errors, dollars
2. **Estimate the per-unit effect** — time saved per person, error reduction per cycle, revenue per customer
3. **Multiply across scope** — how many people, how often, over what period

Example chain: "I improved the intake process for new clients."
→ How many clients per month? ~20
→ How much faster? Cut from 3 hours to 45 minutes each
→ 20 × 2.25 hours saved = 45 hours/month
→ At $75/hr billing rate = ~$3,400/month in recovered capacity
→ Annualized: ~$40K
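
The chain above is straight arithmetic and can be sketched directly. A minimal sketch only; the figures are the hypothetical intake-process numbers from the example, not real data:

```python
# Chained estimation: countable unit x per-unit effect x scope.
# All figures are the hypothetical example values above, not real data.
clients_per_month = 20
hours_before = 3.0
hours_after = 0.75                 # 45 minutes
billing_rate = 75                  # dollars per hour, assumed

hours_saved_per_client = hours_before - hours_after               # 2.25 hours
monthly_hours_saved = clients_per_month * hours_saved_per_client  # 45 hours
monthly_value = monthly_hours_saved * billing_rate                # $3,375
annual_value = monthly_value * 12                                 # $40,500
```

Rounding the endpoints ($3,375 to "~$3,400/month", $40,500 to "~$40K") keeps the claim honest without implying false precision.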

The estimate doesn't need to be exact. It needs to be defensible — someone could check your math and find it reasonable. Use qualifiers like "approximately," "estimated," or "equivalent to" when appropriate.

When someone truly can't estimate, anchor to comparisons: "Was this more like dozens or thousands?" "Days or months?" "One team or the whole company?" Even rough order-of-magnitude framing is better than nothing.

## Context excavation

These questions help surface where numbers hide. Adapt to the domain — the spirit matters more than the literal phrasing.

**Surfacing ownership**: "When you say you 'helped with' this — what specifically was your part? What decisions did you make? What wouldn't have happened without you?"

**Finding scale**: "How many people used/saw/depended on this? What happened when it wasn't working?"

**Revealing before/after**: "What did this look like before you got involved? What changed by the time you moved on?"

**Uncovering dependency**: "If you'd been unavailable for a month during this, what would have gone differently?"

**Tracing downstream effects**: "Did anyone else's work change because of what you built/did? Did it become a pattern or standard?"

## Navigating underselling

People systematically understate their contributions. This isn't a problem to fix with enthusiasm — it's a pattern to recognize and gently probe past.

**Common deflection patterns and how to respond**:

- "I just helped with..." → Ask what specifically they owned, decided, or built. "Helped" often masks primary contribution.
- "It was a team effort" → Acknowledge the team, then ask what their distinct contribution was. Shared outcomes still have individual inputs.
- "It wasn't that impressive" → Provide context if you can. What they consider routine may be unusual at their level or in their industry. Ask: "How many people on your team could have done this?"
- "I don't remember the numbers" → Walk through estimation together rather than dropping it. The exercise itself often jogs memory.
- Silence or blanking → Reframe the question. Instead of "what was the impact?" try "what would have gone wrong if this hadn't been done?"

The goal is not to inflate. It's to get an accurate accounting. Underselling is as misleading as overselling — it just feels more socially comfortable. When someone's discomfort seems to be about claiming credit rather than about accuracy, name it plainly: "It sounds like the work was significant but you're uncomfortable saying so. Let's just look at what happened."

## The density rule

A well-quantified claim contains up to four elements:

1. **A number that matters** — percentage, time, money, users, frequency
2. **Specifics** — the actual tools, methods, or domain (not "improved the system" but "redesigned the client intake workflow in Salesforce")
3. **Business context** — why it mattered beyond the immediate task
4. **Temporal signal** — when relevant (early adoption, tight deadline, rapid growth period)

Not every claim needs all four. But claims with zero numbers are almost always improvable.

## Before/after examples

Single-line transformation examples showing quantification in practice, across different domains:

**Operations**

- Before: "Managed the onboarding process for new hires"
- After: "Redesigned employee onboarding, reducing ramp time from 6 weeks to 3 and saving ~$15K per hire in unproductive salary"

**Marketing**

- Before: "Ran social media for the company"
- After: "Grew Instagram engagement 3.2× (800 → 2,600 avg. interactions/post) over 6 months, contributing to a 22% increase in inbound leads"

**Design**

- Before: "Redesigned the checkout flow"
- After: "Redesigned checkout flow, reducing cart abandonment from 68% to 41%, est. $180K annual revenue recovery"

**Engineering**

- Before: "Helped improve site performance"
- After: "Reduced API response time from 800ms to 45ms through query optimization and caching layer, enabling real-time features on a 5K-DAU product"

**Management**

- Before: "Led a team and delivered projects on time"
- After: "Led 4-person team delivering 3 product launches in 9 months; two team members promoted within the year"

**Sales**

- Before: "Responsible for enterprise sales in the Northeast region"
- After: "Closed $2.1M in new enterprise contracts across 8 accounts in 14 months, 140% of quota, shortest average sales cycle on the team (47 days vs. 72 avg.)"

## Writing heuristics

These apply when turning extracted metrics into written claims.

### Word choice

**Prefer** (precise, measurable): "reduced," "increased," "delivered," "eliminated," "maintained," "resolved," "established"

**Acceptable** (professional, clear): "improved," "built," "designed," "led," "streamlined," "consolidated"

**Avoid** (vague, inflated): "revolutionized," "transformed," "spearheaded" (without proof), "passionate about," "leveraged synergies"

### Credibility check

Before finalizing any claim, test it:

- Could the person defend this number in a conversation without flinching?
- Would a skeptical peer find this plausible, not inflated?
- Does the excitement come from the facts, or from the adjectives?
- Is this precise enough that someone could verify the order of magnitude?

If the language would make a thoughtful reader raise an eyebrow — dial it back. Understatement builds more trust than overstatement. Let the numbers carry the weight.

### Buzzword detection

Strip these patterns on sight:

- Superlatives without evidence ("incredible results," "massive impact")
- Corporate filler ("leveraged," "synergized," "ideated," "aligned stakeholders")
- Hype framing ("game-changing," "revolutionary," "disrupted")
- Cliché identity claims ("passionate problem-solver," "driven self-starter")

Replace with the specific thing that happened and the specific number attached to it.
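
As a rough illustration, these patterns can be flagged mechanically before a human rewrite. A sketch only; the word list is a partial sample assumed from the bullets above, not a canonical set:

```python
import re

# Partial, illustrative list drawn from the patterns above; extend as needed.
BUZZWORDS = [
    "incredible", "massive impact", "leveraged", "synergized", "ideated",
    "game-changing", "revolutionary", "disrupted",
    "passionate problem-solver", "driven self-starter",
]
BUZZWORD_RE = re.compile("|".join(re.escape(w) for w in BUZZWORDS), re.IGNORECASE)

def flag_buzzwords(claim: str) -> list[str]:
    """Return the buzzword phrases found in a claim, lowercased."""
    return [m.group(0).lower() for m in BUZZWORD_RE.finditer(claim)]

flag_buzzwords("Leveraged a game-changing platform for massive impact")
# -> ['leveraged', 'game-changing', 'massive impact']
```

A flagged phrase is a prompt to substitute the specific thing that happened, not something a script can rewrite on its own.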

Overview

This skill extracts quantifiable metrics and business outcomes from experience descriptions via a structured conversational workflow. It acts as a precise, skeptical-but-generous interviewer to turn vague accounts into concrete, defensible claims. Use it when you need measurable, verifiable impact statements across any domain.

How this skill works

The skill probes four lenses—reach/scale, efficiency gains, quality/consistency, and financial impact—to surface numbers hidden in stories. When exact figures are missing, it walks through chained estimation: define a unit, estimate the per-unit effect, and scale across scope. Every claim is framed to be defensible and tied to evidence the person could validate.

When to use it

  • Preparing impact-focused resume bullets or portfolio statements
  • Drafting measurable outcomes for performance reviews or case studies
  • Converting anecdotal project notes into executive-friendly metrics
  • Estimating scale or value when precise data isn’t available
  • Coaching others to speak clearly about their contribution

Best practices

  • Prioritize one strong metric over many weak ones; numbers should be meaningful and verifiable
  • Use chained estimation: unit × per-unit effect × scope × time period
  • Ask ownership and before/after questions to anchor the claim in responsibility and change
  • Label estimates with qualifiers like "approx." or "estimated" to remain honest and defensible
  • Avoid buzzwords; prefer specific verbs (reduced, increased, delivered) and concrete context

Example use cases

  • Turn “helped with onboarding” into “redesigned onboarding, cutting ramp time from 6 to 3 weeks and saving ~$15K per hire”
  • Convert a vague performance improvement into a concrete chain of estimates (users × latency reduction × conversions)
  • Extract financial impact from operational changes (hours saved × hourly cost → annualized savings)
  • Quantify design or marketing wins as percent change and absolute counts (engagement 800→2,600 avg interactions/post)
  • Frame leadership contributions with scope and outcomes (led 4-person team delivering 3 launches in 9 months; two promotions)

FAQ

What if the person truly doesn't remember numbers?

Walk through comparison anchors (dozens vs thousands, days vs months) and chained estimation. Even order-of-magnitude framing is useful and defensible when labeled as estimated.

How do you avoid inflating impact?

Require traceable evidence: could the person defend the math in conversation? Use conservative estimates, qualifiers, and prefer understatement over hype.

Which lenses are mandatory?

None are mandatory. Use the four lenses as prompts—one strong, well-supported metric is better than filling every category.