
designing-surveys skill

/skills/designing-surveys

This skill helps you design effective surveys by guiding goal clarification, metric selection, clean question design, and respondent targeting.

npx playbooks add skill refoundai/lenny-skills --skill designing-surveys

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
3.8 KB
---
name: designing-surveys
description: Help users design effective surveys. Use when someone is creating customer surveys, NPS measurements, product-market fit surveys, or feedback collection mechanisms.
---

# Designing Surveys

Help the user design effective surveys using frameworks from 9 product leaders who have built rigorous research and feedback systems.

## How to Help

When the user asks for help with surveys:

1. **Clarify the goal** - Determine if they're measuring satisfaction, identifying problems, or prioritizing features
2. **Choose the right metric** - Help them select between NPS, CSAT, PMF survey, or custom approaches
3. **Design clean questions** - Ensure each question measures one thing precisely
4. **Target the right respondents** - Help them reach users with fresh, relevant experience

## Core Principles

### NPS is scientifically flawed
Judd Antin: "NPS is the best example of the marketing industry marketing itself. The consensus in the survey science community is that NPS makes all the mistakes. Customer satisfaction, a simple CSAT metric, is better. It has better data properties, it is more precise, it is more correlated to business outcomes." Use CSAT with a 5- to 7-point scale instead.
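
A minimal sketch of the CSAT alternative, assuming a 1-5 satisfaction scale and reporting the "top-2-box" share; the responses below are invented for illustration:

```python
# Invented example responses on a 1-5 CSAT scale (5 = very satisfied).
responses = [5, 4, 3, 5, 2, 4, 4, 1, 5, 4]

# "Top-2-box" CSAT: the share of respondents answering 4 or 5.
satisfied = sum(1 for r in responses if r >= 4)
csat_pct = 100 * satisfied / len(responses)

print(f"CSAT (top-2-box): {csat_pct:.0f}%")  # -> 70%
```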

### Force prioritization with constraints
Nicole Forsgren: "Let them pick three, just three. Of those three, how often does this affect you? Is this hourly? Is this daily? Is this weekly?" Limit respondents to their top barriers to keep data clean, then measure frequency to weight impact.

### Survey your best customers at the right time
Gia Laudi: "Very importantly, they signed up for your product recently enough that they remember what life was like before. Generally, we say that's in the three to six-month range." Target customers who have been using the product for 3-6 months so their memory of the 'before' state is fresh.
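
A small sketch of the targeting step, assuming hypothetical customer records with a signup date and using a 90-180 day window as a proxy for "3-6 months":

```python
from datetime import date, timedelta

# Hypothetical customer records; in practice these would come from your CRM.
customers = [
    {"email": "a@example.com", "signed_up": date(2024, 1, 10), "is_active": True},
    {"email": "b@example.com", "signed_up": date(2024, 5, 2), "is_active": True},
    {"email": "c@example.com", "signed_up": date(2023, 6, 1), "is_active": True},
]

today = date(2024, 7, 1)
window_start = today - timedelta(days=180)  # roughly six months ago
window_end = today - timedelta(days=90)     # roughly three months ago

# Keep active customers who signed up 3-6 months ago, so the "before" is fresh.
recipients = [
    c for c in customers
    if c["is_active"] and window_start <= c["signed_up"] <= window_end
]
print([c["email"] for c in recipients])  # -> ['a@example.com']
```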

### Onboarding surveys improve conversion
Laura Schaffer: "We just asked for forgiveness and put these questions into the signup flow. And it improved conversion by like 5%, just improved signups." Adding 'good friction' in the form of targeted questions can increase conversion by reassuring users they're in the right place.

### Avoid double-barreled questions
Nicole Forsgren: "You're asking four different questions there. If someone answers yes, was it the build? Was it the test? Was it slow or was it flaky?" Ensure each survey question only asks about one specific variable.

### Use MaxDiff for feature prioritization
Madhavan Ramanujam: "Identify the most important for you, and the least important. If you do this a few times, you will be able to prioritize the entire feature set in a relative fashion." MaxDiff (Most/Least) surveys are superior to simple ranking for identifying value drivers.
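
A minimal count-based sketch of MaxDiff analysis (more rigorous approaches fit a choice model); the feature names and tasks are invented, and the best-minus-worst score normalized by appearances is one common simple summary:

```python
from collections import defaultdict

# Each task shows a subset of features; the respondent marks one "most"
# and one "least" important. The data below is made up for illustration.
tasks = [
    {"shown": ["SSO", "API", "Dark mode", "Audit log"], "best": "SSO", "worst": "Dark mode"},
    {"shown": ["API", "Dark mode", "Export", "SSO"], "best": "API", "worst": "Dark mode"},
    {"shown": ["Audit log", "Export", "SSO", "API"], "best": "SSO", "worst": "Export"},
]

counts = defaultdict(lambda: {"best": 0, "worst": 0, "shown": 0})
for t in tasks:
    for item in t["shown"]:
        counts[item]["shown"] += 1
    counts[t["best"]]["best"] += 1
    counts[t["worst"]]["worst"] += 1

# Simple best-minus-worst score, normalized by how often each item appeared.
scores = {item: (c["best"] - c["worst"]) / c["shown"] for item, c in counts.items()}
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:10s} {score:+.2f}")
```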

## Questions to Help Users

- "What specific decision will this survey inform?"
- "Are you asking about one thing per question, or multiple things?"
- "Who are your 'best' customers and when did they sign up?"
- "Are all scale options visible on mobile without scrolling?"
- "How will you force respondents to prioritize rather than rate everything high?"

## Common Mistakes to Flag

- **Double-barreled questions** - Asking about speed AND complexity in one question
- **Too many options** - Allowing respondents to select unlimited items instead of forcing prioritization
- **Wrong timing** - Surveying customers who are too new (no experience) or too old (forgot the 'before')
- **NPS worship** - Relying on a metric with known scientific flaws over simpler, better alternatives
- **Hidden scale options** - Mobile surveys where users can't see all options create response bias

## Deep Dive

For all 10 insights from 9 guests, see `references/guest-insights.md`

## Related Skills

- Writing North Star Metrics
- Defining Product Vision
- Prioritizing Roadmap
- Setting OKRs & Goals

Overview

This skill helps you design effective, actionable surveys for customers, product research, and feedback collection. It focuses on clarifying goals, selecting the right metric, designing tight questions, and reaching the right respondents at the right time. Use it to turn survey responses into decisions rather than noise.

How this skill works

I start by clarifying the decision the survey must inform, then recommend the appropriate metric (CSAT, PMF, MaxDiff, or a custom scale). I audit questions to remove double-barreled items, enforce single-variable questions, and suggest forced-priority formats when needed. Finally, I help define respondent targeting and timing to reduce recall bias and increase signal.

When to use it

  • Designing customer satisfaction or product-market fit surveys
  • Measuring onboarding or signup intent and improving conversion
  • Prioritizing feature requests and roadmap decisions
  • Building recurring feedback loops for product improvements
  • Validating hypotheses before committing engineering resources

Best practices

  • Start by naming the exact decision the survey will inform; design every question to serve that decision
  • Prefer CSAT or targeted scales over NPS for cleaner, more actionable data
  • Ask one thing per question; remove double-barreled phrasing
  • Force prioritization when you need ranking (limit selections or use MaxDiff)
  • Survey customers 3–6 months after signup for fresh before/after comparisons
  • Ensure all scale options are visible on mobile to avoid response bias

Example use cases

  • Create a 5-question onboarding survey embedded in signup to surface user intent and boost conversion
  • Run a MaxDiff study to rank potential features before Q2 roadmap planning
  • Design a CSAT pulse for recent users to track satisfaction trends and correlate with churn
  • Survey power users (3–6 months in) to identify their top barriers and how often each occurs, then use the results to prioritize

FAQ

When is NPS appropriate?

Only when you need a single, high-level directional signal and accept its statistical limitations; prefer CSAT or targeted scales for precision and actionable correlation with outcomes.

How do I force prioritization without annoying respondents?

Limit choices (e.g., pick your top 3), ask frequency/impact follow-ups, or use MaxDiff so choices feel quick and meaningful.