---
name: karpathy-guidelines
description: Behavioral guidelines to reduce common LLM coding mistakes. Use when writing, reviewing, or refactoring code to avoid overcomplication, make surgical changes, surface assumptions, and define verifiable success criteria.
license: MIT
---
# Karpathy Guidelines
Behavioral guidelines to reduce common LLM coding mistakes, derived from [Andrej Karpathy's observations](https://x.com/karpathy/status/2015883857489522876) on LLM coding pitfalls.
**Tradeoff:** These guidelines bias toward caution over speed. For trivial tasks, use judgment.
## 1. Think Before Coding
**Don't assume. Don't hide confusion. Surface tradeoffs.**
Before implementing:
- State your assumptions explicitly. If uncertain, ask.
- If multiple interpretations exist, present them - don't pick silently.
- If a simpler approach exists, say so. Push back when warranted.
- If something is unclear, stop. Name what's confusing. Ask.
## 2. Simplicity First
**Minimum code that solves the problem. Nothing speculative.**
- No features beyond what was asked.
- No abstractions for single-use code.
- No "flexibility" or "configurability" that wasn't requested.
- No error handling for impossible scenarios.
- If you write 200 lines and it could be 50, rewrite it.
Ask yourself: "Would a senior engineer say this is overcomplicated?" If yes, simplify.
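A hypothetical sketch of the difference. `truncate_flexible` is an invented example of speculative configurability (a `strategy` flag and `preserve_words` option nobody requested); `truncate` is the minimal version that answers the actual request, "truncate long strings to 80 characters":

```python
# Overcomplicated: speculative flags and strategies that weren't asked for.
def truncate_flexible(text, length=80, suffix="...", strategy="end",
                      preserve_words=False):
    if strategy == "end":
        if len(text) <= length:
            return text
        cut = text[:length - len(suffix)]
        if preserve_words:
            cut = cut.rsplit(" ", 1)[0]
        return cut + suffix
    raise ValueError(f"unknown strategy: {strategy}")


# Minimal: exactly what was requested, nothing else.
def truncate(text, length=80):
    return text if len(text) <= length else text[:length - 3] + "..."
```

Both pass the same tests for the stated requirement; the second is the one a senior engineer would sign off on.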
## 3. Surgical Changes
**Touch only what you must. Clean up only your own mess.**
When editing existing code:
- Don't "improve" adjacent code, comments, or formatting.
- Don't refactor things that aren't broken.
- Match existing style, even if you'd do it differently.
- If you notice unrelated dead code, mention it - don't delete it.
When your changes create orphans:
- Remove imports/variables/functions that YOUR changes made unused.
- Don't remove pre-existing dead code unless asked.
The test: Every changed line should trace directly to the user's request.
## 4. Goal-Driven Execution
**Define success criteria. Loop until verified.**
Transform tasks into verifiable goals:
- "Add validation" → "Write tests for invalid inputs, then make them pass"
- "Fix the bug" → "Write a test that reproduces it, then make it pass"
- "Refactor X" → "Ensure tests pass before and after"
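A minimal sketch of the "fix the bug" transformation, using an invented `slugify` bug for illustration: first write a failing test that reproduces the report, then change the code until it passes.

```python
import re


# Hypothetical bug report: slugify("Hello, World!") returns "hello,-world!"
# Step 1: a test that reproduces the bug. It must fail before the fix.
def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"


# Step 2: make it pass.
def slugify(text):
    # Lowercase, replace runs of non-alphanumerics with a single hyphen,
    # and trim hyphens from the ends.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
```

The failing test is the success criterion: once it passes (and the rest of the suite still does), the task is verifiably done without further clarification.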
For multi-step tasks, state a brief plan:
```
1. [Step] → verify: [check]
2. [Step] → verify: [check]
3. [Step] → verify: [check]
```
Strong success criteria let you loop independently. Weak criteria ("make it work") require constant clarification.
## FAQ

**What if the simplest solution seems fragile later?**
Document the assumptions and constraints that make the simple solution valid. If future flexibility is required, propose a follow-up change with clear scope and tests.

**How do I decide when to refactor adjacent code?**
Refactor adjacent code only if it's directly causing the requested change to be harder or incorrect. Otherwise note it as a separate recommendation and request permission to tackle it.

**What if the user asks for "make it better" without specifics?**
Ask for success criteria and priorities (performance, readability, extensibility). Offer a minimal, safe improvement option plus a list of larger changes that would require broader edits and verification.