
building-with-llms skill


This skill guides you in building effective AI applications, with practical prompting, architecture, and evaluation techniques for LLM projects.

npx playbooks add skill refoundai/lenny-skills --skill building-with-llms

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
---
name: building-with-llms
description: Help users build effective AI applications. Use when someone is building with LLMs, writing prompts, designing AI features, implementing RAG, creating agents, running evals, or trying to improve AI output quality.
---

# Building with LLMs

Help the user build effective AI applications using practical techniques from 60 product leaders and AI practitioners.

## How to Help

When the user asks for help building with LLMs:

1. **Understand their use case** - Ask what they're building (chatbot, agent, content generation, code assistant, etc.)
2. **Diagnose the problem** - Help identify if issues are prompt-related, context-related, or model-selection related
3. **Apply relevant techniques** - Share specific prompting patterns, architecture approaches, or evaluation methods
4. **Challenge common mistakes** - Push back on over-reliance on vibes, skipping evals, or using the wrong model for the task

## Core Principles

### Prompting

**Few-shot examples beat descriptions**
Sander Schulhoff: "If there's one technique I'd recommend, it's few-shot prompting—giving examples of what you want. Instead of describing your writing style, paste a few previous emails and say 'write like this.'"
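
For illustration, a minimal Python sketch of few-shot prompt assembly; `call_llm` is a hypothetical stand-in for whatever LLM client is in use, and the example emails are made up:

```python
# Few-shot prompting sketch: show the model examples instead of describing style.
# `call_llm` is a hypothetical placeholder for your actual LLM client.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your model and return its text reply."""
    raise NotImplementedError("wire up your LLM client here")

def few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    parts = ["Write a reply in the same style as these examples.\n"]
    for i, (inp, out) in enumerate(examples, 1):
        parts.append(f"Example {i} input:\n{inp}\nExample {i} output:\n{out}\n")
    parts.append(f"Now respond to:\n{new_input}")
    return "\n".join(parts)

examples = [
    ("Can we move our call?", "Sure—Thursday afternoon works for me. Does 2pm suit you?"),
    ("Any update on the contract?", "Yes—legal signed off this morning. Sending the final copy now."),
]
prompt = few_shot_prompt(examples, "Are you free to review the draft this week?")
# reply = call_llm(prompt)
```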

**Provide your point of view**
Wes Kao: "Sharing my POV makes output way better. Don't just ask 'What would you say?' Tell it: 'I want to say no, but I'd like to preserve the relationship. Here's what I'd ideally do...'"

**Use decomposition for complex tasks**
Sander Schulhoff: "Ask 'What subproblems need solving first?' Get the list, solve each one, then synthesize. Don't ask the model to solve everything at once."
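
A sketch of that decomposition loop, again assuming a hypothetical `call_llm` helper:

```python
# Decomposition sketch: list subproblems, solve each, then synthesize.
# `call_llm` is a hypothetical placeholder for your actual LLM client.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def solve_by_decomposition(task: str) -> str:
    # 1. Ask for the subproblems, one per line.
    plan = call_llm(
        f"Task: {task}\nWhat subproblems need solving first? "
        "List them, one per line, no numbering."
    )
    subproblems = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Solve each subproblem independently.
    partials = [
        call_llm(f"Task: {task}\nSolve only this subproblem:\n{sub}")
        for sub in subproblems
    ]

    # 3. Synthesize the partial answers into one final response.
    joined = "\n\n".join(f"{s}:\n{p}" for s, p in zip(subproblems, partials))
    return call_llm(
        f"Task: {task}\nHere are the solved subproblems:\n{joined}\n"
        "Combine them into a single, coherent answer."
    )
```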

**Self-criticism improves output**
Sander Schulhoff: "Ask the LLM to check and critique its own response, then improve it. Models can catch their own errors when prompted to look."
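
A minimal self-critique loop, with the same hypothetical `call_llm` placeholder:

```python
# Self-criticism sketch: generate, ask the model to critique its own answer,
# then revise. `call_llm` is a hypothetical placeholder for your client.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def generate_with_self_critique(task: str, rounds: int = 1) -> str:
    answer = call_llm(task)
    for _ in range(rounds):
        critique = call_llm(
            f"Task: {task}\nDraft answer:\n{answer}\n"
            "Check this answer for errors or omissions and list them."
        )
        answer = call_llm(
            f"Task: {task}\nDraft answer:\n{answer}\n"
            f"Critique:\n{critique}\nRewrite the answer fixing every issue."
        )
    return answer
```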

**Roles help style, not accuracy**
Sander Schulhoff: "Roles like 'Act as a professor' don't help accuracy tasks. But they're great for controlling tone and style in creative work."

**Put context at the beginning**
Sander Schulhoff: "Place long context at the start of your prompt. It gets cached (cheaper), and the model won't forget its task when processing."
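
A tiny sketch of that prompt layout: the long, reusable context goes first (cache-friendly), the task instruction goes last.

```python
# Prompt layout sketch: long context first, short task instruction last,
# so the model ends on what it is supposed to do.

def build_prompt(long_context: str, task: str) -> str:
    return (
        "Reference material:\n"
        f"{long_context}\n\n"
        "Using only the reference material above, do the following:\n"
        f"{task}"
    )
```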

### Architecture

**Context engineering > prompt engineering**
Bret Taylor: "If a model makes a bad decision, it's usually lack of context. Fix it at the root—feed better data via MCP or RAG."

**RAG quality = data prep quality**
Chip Huyen: "The biggest gains come from data preparation, not vector database choice. Rewrite source data into Q&A format. Add annotations for context humans take for granted."
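
One way this rewriting step can look in Python; `call_llm` and the JSON output format are assumptions, not a prescribed pipeline:

```python
# Data-prep sketch: rewrite raw source chunks into Q&A pairs before indexing.
# `call_llm` is a hypothetical placeholder for your actual LLM client.

import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def chunk_to_qa(chunk: str, product_context: str) -> list[dict]:
    """Turn one documentation chunk into question/answer records for retrieval."""
    raw = call_llm(
        f"Background a reader may not know: {product_context}\n"
        f"Source text:\n{chunk}\n"
        "Rewrite this as 2-4 question/answer pairs. "
        'Return JSON like [{"question": "...", "answer": "..."}].'
    )
    return json.loads(raw)

# Each record is then embedded and stored in whatever vector store you use;
# question text usually matches user queries far better than raw prose does.
```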

**Layer models for robustness**
Bret Taylor: "Having AI supervise AI is effective. Layer cognitive steps—one model generates, another reviews. This moves you from 90% to 99% accuracy."
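
A sketch of that layering, with hypothetical `generate_llm` and `review_llm` placeholders for two model endpoints (they could also be the same model):

```python
# Layering sketch: one model generates, a second reviews and gates the output.

def generate_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your generator model here")

def review_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your reviewer model here")

def generate_with_review(task: str, max_attempts: int = 3) -> str:
    draft = generate_llm(task)
    for _ in range(max_attempts):
        verdict = review_llm(
            f"Task: {task}\nDraft:\n{draft}\n"
            "Reply PASS if the draft fully satisfies the task, "
            "otherwise reply FAIL followed by what to fix."
        )
        if verdict.strip().upper().startswith("PASS"):
            return draft
        draft = generate_llm(
            f"Task: {task}\nFix these issues:\n{verdict}\nPrevious draft:\n{draft}"
        )
    return draft  # best effort after max_attempts
```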

**Use specialized models for specialized tasks**
Amjad Masad: "We use Claude Sonnet for coding, other models for critiquing. A 'society of models' with different roles outperforms one general model."

**200ms is the latency threshold**
Ryan J. Salva (GitHub Copilot): "The sweet spot for real-time suggestions is ~200ms. Slower feels like an interruption. Design your architecture around this constraint."
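
One way to hold that budget in code: enforce a timeout and fall back to showing nothing. `fetch_suggestion` is a hypothetical async call to your completion backend.

```python
# Latency-budget sketch: if the model can't answer within ~200ms, skip the
# suggestion rather than interrupt the user late.

import asyncio

async def fetch_suggestion(prefix: str) -> str:
    raise NotImplementedError("wire up your completion backend here")

async def suggest(prefix: str, budget_s: float = 0.2) -> str | None:
    try:
        return await asyncio.wait_for(fetch_suggestion(prefix), timeout=budget_s)
    except asyncio.TimeoutError:
        return None  # show nothing instead of a late, interrupting suggestion
```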

### Evaluation

**Evals are mandatory, not optional**
Kevin Weil (OpenAI): "Writing evals is becoming a core product skill. A 60% reliable model needs different UX than 95% or 99.5%. You can't design without knowing your accuracy."

**Binary scores > Likert scales**
Hamel Husain: "Force Pass/Fail, not 1-5 scores. Scales produce meaningless averages like '3.7'. Binary forces real decisions."
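
A minimal binary eval harness; `run_model` and the example cases are illustrative placeholders:

```python
# Binary eval sketch: every case is Pass/Fail, and the report is a pass rate,
# not an average score. `run_model` is a hypothetical call to the system under test.

def run_model(prompt: str) -> str:
    raise NotImplementedError("wire up the system under test here")

def run_evals(cases: list[dict]) -> float:
    """Each case: {'prompt': ..., 'check': callable(output) -> bool}."""
    passed = 0
    for case in cases:
        output = run_model(case["prompt"])
        ok = case["check"](output)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case['prompt'][:60]}")
    return passed / len(cases)

cases = [
    {"prompt": "Refund policy for damaged items?",
     "check": lambda out: "30 days" in out},           # illustrative check
    {"prompt": "Summarize this ticket in one sentence.",
     "check": lambda out: out.count(".") <= 1 and len(out) > 0},
]
# pass_rate = run_evals(cases)
```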

**Start with vibes, evolve to evals**
Howie Liu: "For novel products, start with open-ended vibes testing. Only move to formal evals once use cases converge."

**Validate your LLM judge**
Hamel Husain: "If using LLM-as-judge, you must eval the eval. Measure agreement with human experts. Iterate until it aligns."
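
A small sketch of "evaling the eval": compare judge verdicts against human labels and report raw agreement. `judge_llm` is a hypothetical call to your judge model.

```python
# Judge-validation sketch: measure agreement between the LLM judge and experts.

def judge_llm(output: str) -> bool:
    """Placeholder: return True for Pass, False for Fail."""
    raise NotImplementedError("wire up your judge model here")

def judge_agreement(labeled: list[tuple[str, bool]]) -> float:
    """labeled: (model_output, human_pass_fail) pairs from expert review."""
    agree = sum(judge_llm(output) == human for output, human in labeled)
    return agree / len(labeled)

# Iterate on the judge prompt until agreement with experts is high enough to
# trust; otherwise the eval measures the judge, not your product.
```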

### Building & Iteration

**Retry failures—models are stochastic**
Benjamin Mann (Anthropic): "If it fails, try the exact same prompt again. Success rates are much higher on retry than on banging on a broken approach."
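
A simple retry wrapper along those lines; `call_llm` and `looks_valid` are hypothetical placeholders for your client and your output check:

```python
# Retry sketch: re-run the same prompt a few times before changing approach,
# since failures are often just sampling noise.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def looks_valid(output: str) -> bool:
    """Placeholder check: e.g., parses as JSON, passes a schema, non-empty."""
    return bool(output.strip())

def call_with_retries(prompt: str, attempts: int = 3) -> str | None:
    for _ in range(attempts):
        output = call_llm(prompt)
        if looks_valid(output):
            return output
    return None  # only now consider rewriting the prompt
```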

**Be ambitious in your asks**
Benjamin Mann: "The difference between effective and ineffective Claude Code users: ambitious requests. Ask for the big change, not incremental tweaks."

**Cross-pollinate between models**
Guillermo Rauch: "When stuck after 100+ iterations, copy the code to a different model (e.g., from v0 to ChatGPT o1). Fresh perspective unblocks you."

**Compounding engineering**
Dan Shipper: "For every unit of work, make the next unit easier. Save prompts that work. Build a library. Your team's AI effectiveness compounds."
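
A minimal prompt-library sketch; the file name and schema are illustrative, not a required format:

```python
# Prompt-library sketch: save prompts that worked so the next task starts
# from a known-good template.

import json
import pathlib

LIBRARY = pathlib.Path("prompt_library.json")

def save_prompt(name: str, prompt: str, notes: str = "") -> None:
    data = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    data[name] = {"prompt": prompt, "notes": notes}
    LIBRARY.write_text(json.dumps(data, indent=2))

def load_prompt(name: str) -> str:
    return json.loads(LIBRARY.read_text())[name]["prompt"]
```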

### Working with AI Tools

**Learn to read and debug, not memorize syntax**
Amjad Masad: "The ROI on coding doubles every 6 months because AI amplifies it. Focus on reading code and debugging—syntax is handled."

**Use chat mode to understand**
Anton Osika: "Use 'chat mode' to ask the AI to explain its logic. 'Why did you do this? What am I missing?' Treat it as a tutor."

**Vibe coding is a real skill**
Elena Verna: "I put vibe coding on my resume. Build functional prototypes with natural language before handing to engineering."

## Questions to Help Users

- "What are you building and what's the core user problem?"
- "What does the model get wrong most often?"
- "Are you measuring success systematically or going on vibes?"
- "What context does the model have access to?"
- "Have you tried few-shot examples?"
- "What happens when you retry failed prompts?"

## Common Mistakes to Flag

- **Vibes forever** - Eventually you need real evals, not just "it feels good"
- **Prompt-only thinking** - Often the fix is better context, not better prompts
- **One model for everything** - Different models excel at different tasks
- **Giving up after one failure** - Stochastic systems need retries
- **Skipping the human review** - AI output needs human validation, especially early on

## Deep Dive

For all 110 insights from 60 guests, see `references/guest-insights.md`.

## Related Skills

- AI Product Strategy
- AI Evals
- Vibe Coding
- Evaluating New Technology

Overview

This skill helps product builders design, evaluate, and ship effective AI applications using practical techniques from experienced product leaders. It focuses on prompt design, context engineering, architecture patterns, evaluation strategies, and iterative workflows that improve output quality. Use it to move from vague "vibes" to measurable, reliable AI features.

How this skill works

I ask targeted questions about your use case, failure modes, and evaluation practices to diagnose whether issues are prompt-, context-, or model-related. I then recommend concrete techniques: few-shot prompts, decomposition, RAG/data prep, layered model checks, retries, and evaluation design. I also flag common mistakes and suggest architecture or tooling changes to raise reliability and UX. Finally, I propose measurable evals and iteration plans to track progress.

When to use it

  • Designing a chatbot, agent, or assistant behavior
  • Implementing RAG or choosing context sources
  • Writing prompts or templates for users or systems
  • Improving model reliability and reducing hallucinations
  • Setting up evals and pass/fail metrics

Best practices

  • Start by clarifying the core user problem and success metric before iterating on prompts
  • Favor context engineering (better data, RAG) over chasing prompt tricks
  • Use few-shot examples and explicit POVs to control style and behavior
  • Decompose complex tasks into subproblems and synthesize results
  • Layer models: one generates, another critiques or validates for higher accuracy
  • Build binary pass/fail evals and validate any LLM-as-judge against human labels

Example use cases

  • Create a customer-support agent with high factual accuracy using RAG and data Q&A rewriting
  • Design a code-assistant flow: Claude Sonnet for generation plus another model for critique
  • Write prompt libraries with few-shot examples for consistent tone across emails or content
  • Implement an eval suite with binary checks and retry logic to reach target reliability
  • Prototype a self-critiquing agent: ask an LLM to review and iterate on its outputs automatically

FAQ

How do I know whether to fix prompts or context?

If errors are factual or missing info, fix context (add RAG or more source data). If tone or format is wrong, adjust prompting or few-shot examples.

When should I build formal evals?

Start informal vibe testing during exploration. Move to binary, automated evals once use cases stabilize or when making UX/launch decisions.