
conducting-interviews skill

/skills/conducting-interviews

This skill helps you design and conduct behavioral-based hiring interviews, guiding structure, questioning, and evaluation to reveal true candidate capability.

npx playbooks add skill refoundai/lenny-skills --skill conducting-interviews

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
4.2 KB
---
name: conducting-interviews
description: Help users conduct effective hiring interviews. Use when someone is designing an interview loop, crafting interview questions, evaluating candidates in real-time, or building a structured interview process.
---

# Conducting Interviews

Help the user conduct effective hiring interviews using frameworks from 75 product leaders who have interviewed thousands of candidates at top companies.

## How to Help

When the user asks for help with conducting interviews:

1. **Understand the role** - Ask what position they're hiring for and what competencies matter most
2. **Design the structure** - Help create a consistent, behavioral-based interview process
3. **Craft the questions** - Suggest questions that reveal actual capability, not rehearsed answers
4. **Evaluate effectively** - Guide them on separating signal from noise and avoiding common biases

## Core Principles

### Use behavioral-based interviewing
Bill Carr: "We created a set of objective criteria that would be used and an interview methodology that would be used in every interview, which was the objective criteria would be our leadership principles, and the methodology would be behavioral based interviewing." Ask for specific past examples, not hypotheticals.

### Look past polished delivery
Jackie Bavaro: "Some people sounded really good because they'd say, 'Well, I'll tell you three things. Number one, number two, number three.' And then when I paid attention to my notes, I'd be like, 'Wait, their three ideas weren't actually good ideas.'" Evaluate substance over structure.

### Drill six levels deep
Joe Hudson (on Elon's approach): "You ask them six levels down. You improved sales. How did you do that, exactly? Well, we improved the pipeline. How'd you do that, exactly?" True expertise is revealed by drilling into the technical and process-oriented 'how'.

### Ask how they prepared
Austin Hay: "I like to ask people how they prepared for the interview. You're really asking how does the person think? How did they plan? How did they take things seriously or not?" Preparation style reveals planning depth and systems thinking.

### End with 'anything else?'
Christopher Lochhead: "At the very end you say, 'Hey, Susan, before we wrap, is there anything else?' And often, the most important thing for that person to communicate comes out then." When the formal structure ends, authenticity comes out.

### Test failure and learning
Annie Pearl: "Talk me through your biggest product flop. What happened and what did you do about it?... The rawer the answer in terms of how bad it was and why, the better." Look for brutal honesty and genuine learning.

### Simulate working together
Noam Lovinsky: "I generally like interview questions that allow us to kind of do some work together... getting into the details and really watching each other exercise our craft is really important." Collaborative exercises reveal true capability.

### Use the PEARL framework
Jackie Bavaro: "Problem, Epiphany, Action, Result and Learning. What's the problem that you thought was worth solving? What's your epiphany? What's the insight that you had?" This structure ensures candidates demonstrate unique insight, not just activity.

## Questions to Ask the User

- "What competencies are most critical for this specific role?"
- "Are you testing for skills that can be rehearsed or genuine capability?"
- "How will you distinguish between confident delivery and quality thinking?"
- "What signals true ownership versus 'we' statements that hide contribution?"
- "How are you calibrating across multiple interviewers?"

## Common Mistakes to Flag

- **Performative interviews** - Rewarding rehearsed STAR responses over actual capability
- **Not probing deeply enough** - Accepting surface answers without drilling into specifics
- **High-volume fatigue** - Scheduling back-to-back interviews that degrade judgment
- **Hypothetical questions** - Testing what candidates say they would do instead of what they have done
- **Skipping the 'failure' question** - Missing the chance to test self-awareness and growth mindset

## Deep Dive

For all 91 insights from 75 guests, see `references/guest-insights.md`

## Related Skills

- Writing Job Descriptions
- Evaluating Candidates
- Onboarding New Hires
- Building Team Culture

Overview

This skill helps interviewers design and run effective hiring interviews rooted in behavioral frameworks and real-world practices from experienced product leaders. It focuses on creating consistent interview loops, crafting questions that reveal true capability, and improving evaluation accuracy. Use it to reduce bias, surface ownership, and predict on-the-job performance.

How this skill works

I ask about the role and the most critical competencies, then co-design a structured interview loop with clear objectives, timeboxes, and evaluation criteria. I suggest behavioral and simulation-style questions, probing heuristics (e.g., drill six levels deep), and a rubric to separate signal from polished delivery. I also flag common mistakes and provide calibration guidance for multiple interviewers.

When to use it

  • Designing an interview loop for a new role or team
  • Crafting interview questions to test real-world ownership and skill
  • Calibrating multiple interviewers to the same evaluation standards
  • Evaluating candidates in real time and avoiding common biases
  • Turning interview outcomes into consistent hiring decisions

Best practices

  • Define 3–5 core competencies for the role and map each interview to one or two competencies
  • Use behavioral prompts (past examples) and drill multiple levels into the how and why
  • Prefer collaborative simulations or take-home tasks over hypotheticals when possible
  • Score against objective criteria and capture evidence for each rating
  • Always ask about failures and learning, and end interviews with 'anything else?'

Example use cases

  • Create a 4-interview loop for a senior PM with role-specific rubrics and timeboxes
  • Write 8 behavioral and simulation questions that reveal ownership and trade-off thinking
  • Train the hiring panel on probing techniques and how to avoid rewarding rehearsed answers
  • Audit an existing interview process to remove hypotheticals and add failure-focused prompts
  • Build a calibration checklist for post-interview debriefs to align expectations

FAQ

How do I stop rewarding rehearsed STAR answers?

Ask follow-ups that demand depth: drill into specifics, request artifacts or metrics, and probe which decisions the candidate personally made versus outcomes the team delivered.

When should I use a simulation or take-home exercise?

Use them when you need to observe work output or collaboration skills; keep scope realistic and timeboxed to avoid bias toward candidates with more free time.

How do we calibrate ratings across interviewers?

Use shared rubrics with example answers, run calibration sessions with sample transcripts, and require evidence notes to justify each score.