
ai-product-strategy skill


This skill helps you define an AI product strategy by guiding build-vs-buy decisions, architecture choices, and iteration planning.

npx playbooks add skill refoundai/lenny-skills --skill ai-product-strategy

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md (4.8 KB)
---
name: ai-product-strategy
description: Help users define AI product strategy. Use when someone is building an AI product, deciding where to apply AI in their product, planning an AI roadmap, evaluating build vs buy for AI capabilities, or figuring out how to integrate AI into existing products.
---

# AI Product Strategy

Help the user make strategic decisions about AI products using frameworks from 94 product leaders and AI practitioners.

## How to Help

When the user asks for help with AI product strategy:

1. **Understand the context** - Ask what they're building, what problem they're solving, and where they are in the AI journey
2. **Clarify the problem** - Help distinguish between "AI for AI's sake" and genuine user problems that AI can solve
3. **Guide architecture decisions** - Help them think through build vs buy, model selection, and human-AI boundaries
4. **Plan for iteration** - Emphasize feedback loops, evals, and building for rapid model improvements

## Core Principles

### Start with the problem, not the AI
Aishwarya Naresh Reganti: "In all the advancements of AI, one slippery slope is to keep thinking about solution complexity and forget the problem you're trying to solve. Start with minimal impact use cases to gain a grip on current capabilities."

### Define the human-AI boundary
Adriel Frederick: "When working on algorithmic products, your job is figuring out what the algorithm should be responsible for, what people are responsible for, and the framework for making decisions." This boundary is the core PM decision.

### AI is magical duct tape
Alex Komoroske: "LLMs are magical duct tape—distilled intuition of society. They make writing 'good enough' software significantly cheaper but increase marginal inference costs." Understand the new cost structure.

### Build for the slope, not the snapshot
Asha Sharma: "You have to build for the slope instead of the snapshot of where you are." AI capabilities change fast—build flexible architectures that can swap models as they improve.
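
A minimal sketch of what this can look like in code, in Python (the `ModelClient` protocol and stub provider below are illustrative, not any vendor's real API): call sites never name a specific model, so swapping in a better one is a one-line config change.

```python
from typing import Protocol

class ModelClient(Protocol):
    """Anything that turns a prompt into text; providers are interchangeable."""
    def complete(self, prompt: str) -> str: ...

class StubModel:
    """Placeholder provider; a real vendor SDK would sit behind the same interface."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"

# Model choice lives in config, not in call sites, so upgrades are one-line swaps.
MODELS: dict[str, ModelClient] = {
    "default": StubModel("fast-model-v1"),
    "reasoning": StubModel("reasoning-model-v2"),
}

def summarize(text: str, model: str = "default") -> str:
    return MODELS[model].complete(f"Summarize: {text}")

print(summarize("Quarterly report text", model="reasoning"))
```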

### Design for squishiness
Alex Komoroske: "Even at 99% accuracy, if it punches the user in the face 1% of the time, that's not a viable product. Design assuming the AI will be squishy and not fully accurate."

### Flywheels beat first-mover advantage
Aishwarya Naresh Reganti: "It's not about being first to have an agent. It's about building the right flywheels to improve over time." Log human actions to create data loops for system improvement.

### Society of models, not single models
Amjad Masad: "Future products will be made of many different models—it's quite a heavy engineering project." Use specialized models for different tasks (reasoning vs speed vs coding).

### Use the right tool for each task
Albert Cheng: "We run chess engines for evaluations. LLMs translate that into natural language. Use the right technology for the right task." Don't use LLMs where deterministic algorithms excel.
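
A sketch of how these last two principles can combine in a dispatch layer (all model names here are placeholders): deterministic tasks go to deterministic code, language tasks go to a cheap model, and only genuine multi-step reasoning pays for the expensive one.

```python
# Illustrative routing table; the model names are placeholders, not real endpoints.
MODELS_BY_TASK = {
    "explain": "small-fast-model",    # natural language: a cheap model is enough
    "plan": "large-reasoning-model",  # multi-step reasoning: pay for capability
}

def evaluate_position(fen: str) -> float:
    """Deterministic engine stub: exact answers need no LLM at all."""
    return 0.0  # a real chess engine would compute this score

def route(task: str, payload: str) -> str:
    """Send each task to the cheapest tool that does it well."""
    if task == "evaluate":
        return str(evaluate_position(payload))
    if task in MODELS_BY_TASK:
        return f"call {MODELS_BY_TASK[task]} with: {payload}"
    raise ValueError(f"unknown task: {task}")

print(route("evaluate", "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))
print(route("explain", "why the engine scores this position as equal"))
```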

### Humans are the bottleneck
Alexander Embiricos: "The current limiting factor is human typing speed and multitasking on prompts. Build systems that are 'default useful' without constant prompting."

### Account for non-determinism
Aishwarya Naresh Reganti: "Most people ignore the non-determinism. You don't know how users will behave with natural language, and you don't know how the LLM will respond." Build for variability.

### Agents need autonomy + complexity + natural interaction
Aparna Chennapragada: "Effective agents have (1) increasing autonomy to handle higher-order tasks, (2) ability to handle complex multi-step workflows, and (3) natural, often asynchronous interaction."

### Rebuild your intuitions
Aishwarya Naresh Reganti: "Leaders have to get hands-on—not implementing, but rebuilding intuitions. Be comfortable that your intuitions might not be right." Block time daily to stay current.

## Questions to Help Users

- "What specific user problem are you solving with AI?"
- "What should the AI decide vs. what should humans decide?"
- "How will you handle the 5% of cases where the AI fails?"
- "What feedback loops will improve the system over time?"
- "Are you building for today's model capabilities or anticipating improvements?"
- "Have you set up evals and observability?"

## Common Mistakes to Flag

- **AI for AI's sake** - Adding AI features without clear user problems
- **Single-model thinking** - Not considering specialized models for different tasks
- **Ignoring the failures** - Not designing UX for when AI gets it wrong
- **Static architecture** - Building systems that can't evolve with model improvements
- **Skipping evals** - Not establishing measurement and observability from day one (a minimal harness is sketched after this list)
- **Over-automation** - Removing humans from loops where they add value
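
As a starting point for the evals point above, a deliberately tiny harness (the case set and stub model are illustrative): a fixed set of prompt/expectation pairs that runs on every model or prompt change, so regressions show up as a number rather than an anecdote.

```python
def run_evals(model_fn, cases: list[tuple[str, str]]) -> float:
    """Tiny eval harness: run a fixed case set and report accuracy."""
    passed = 0
    for prompt, expected in cases:
        got = model_fn(prompt)
        ok = expected.lower() in got.lower()
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {prompt!r} -> {got!r}")
    return passed / len(cases)

CASES = [
    ("Classify: 'Where is my refund?'", "billing"),
    ("Classify: 'Package never arrived'", "shipping"),
]
# Stub model for demonstration; replace with real model calls.
score = run_evals(lambda p: "billing", CASES)
print(f"accuracy: {score:.0%}")
```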

## Deep Dive

For all 179 insights from 94 guests, see `references/guest-insights.md`.

## Related Skills

- Building with LLMs
- AI Evals
- Evaluating New Technology
- Platform Strategy

Overview

This skill helps founders and product teams define pragmatic AI product strategy. It focuses on choosing where to apply AI, clarifying human-AI boundaries, and planning iterative roadmaps that tolerate model uncertainty. Use it to decide build vs. buy, prioritize features, and design feedback loops that improve models over time.

How this skill works

I start by clarifying the user problem, current product context, and maturity of your AI stack. Then I surface trade-offs: build vs buy, single-model vs multi-model architectures, and where to place humans in the loop. Finally, I help you design evals, observability, and iteration plans so your product improves with usage and new model capabilities.

When to use it

  • Deciding whether AI actually solves a clear user problem or is just hype
  • Choosing build vs buy for core capabilities like embeddings, fine-tuning, or tool use
  • Designing the human-AI boundary and failure-handling UX
  • Planning a roadmap that anticipates model improvements and swaps
  • Setting up evals, metrics, and data collection for model-driven flywheels

Best practices

  • Start with the problem, not the AI—validate user impact before heavy engineering
  • Define explicit human vs AI responsibilities and UX for failure cases
  • Build modular systems that let you swap or add specialized models over time
  • Instrument evals and observability from day one to measure regressions and gains
  • Design for non-determinism: expect variability and plan safe defaults

Example use cases

  • A SaaS product deciding whether to add an AI assistant for customer support and how to route low-confidence responses to humans
  • A startup evaluating if they should fine-tune a model or integrate a third-party API for summarization
  • A roadmap for migrating from a single LLM to a society of models (fast vs accurate vs domain-specific)
  • Designing data collection and feedback loops to create a model improvement flywheel

FAQ

How do I decide build vs buy for an AI feature?

Compare strategic differentiation, speed to market, cost of inference, and data needs. Buy to move fast; build when the capability is core IP or when you can collect unique, high-quality data for a flywheel.

How should we handle the 1–5% failure cases?

Design clear escalation paths: safe defaults, explicit user confirmations, easy human handoff, and tooling to capture those failure examples for retraining or rules.
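
A compact sketch of such an escalation path, assuming you can derive a confidence score (from log-probs, a verifier model, or a heuristic); the threshold and names are illustrative, not prescriptive.

```python
import json

def handle(prediction: str, confidence: float, threshold: float = 0.8) -> str:
    """Below-threshold answers go to a human queue, and every escalation
    is captured as a future training or eval example."""
    if confidence >= threshold:
        return prediction                      # safe to act automatically
    with open("escalations.jsonl", "a") as f:  # capture the hard case
        f.write(json.dumps({"prediction": prediction,
                            "confidence": confidence}) + "\n")
    return "escalated_to_human"                # explicit handoff, not a silent guess

print(handle("approve_refund", 0.93))  # approve_refund
print(handle("approve_refund", 0.41))  # escalated_to_human
```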

When should we move from one model to many specialized models?

Introduce specialization when tasks diverge in latency, accuracy, or cost requirements. Start with a single model for iteration, then split responsibilities as needs and traffic justify the complexity.