
karpathy-guidelines skill


This skill helps you apply Karpathy guidelines to coding tasks, prioritizing simplicity, surgical changes, and verifiable success criteria.

npx playbooks add skill forrestchang/andrej-karpathy-skills --skill karpathy-guidelines


SKILL.md
---
name: karpathy-guidelines
description: Behavioral guidelines to reduce common LLM coding mistakes. Use when writing, reviewing, or refactoring code to avoid overcomplication, make surgical changes, surface assumptions, and define verifiable success criteria.
license: MIT
---

# Karpathy Guidelines

Behavioral guidelines to reduce common LLM coding mistakes, derived from [Andrej Karpathy's observations](https://x.com/karpathy/status/2015883857489522876) on LLM coding pitfalls.

**Tradeoff:** These guidelines bias toward caution over speed. For trivial tasks, use judgment.

## 1. Think Before Coding

**Don't assume. Don't hide confusion. Surface tradeoffs.**

Before implementing:
- State your assumptions explicitly. If uncertain, ask.
- If multiple interpretations exist, present them - don't pick silently.
- If a simpler approach exists, say so. Push back when warranted.
- If something is unclear, stop. Name what's confusing. Ask.

## 2. Simplicity First

**Minimum code that solves the problem. Nothing speculative.**

- No features beyond what was asked.
- No abstractions for single-use code.
- No "flexibility" or "configurability" that wasn't requested.
- No error handling for impossible scenarios.
- If you write 200 lines and it could be 50, rewrite it.

Ask yourself: "Would a senior engineer say this is overcomplicated?" If yes, simplify.
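A minimal sketch of this principle, using a hypothetical task ("parse a KEY=VALUE config file into a dict") invented for illustration. The speculative version in the comment adds configurability nobody asked for; the minimal version solves exactly the stated problem:

```python
# Hypothetical task: "parse KEY=VALUE lines into a dict".
#
# Speculative version (avoid) - flexibility that wasn't requested:
#
#     class ConfigParser:
#         def __init__(self, delimiter="=", comment_chars=("#", ";"),
#                      strict=False, transformers=None): ...
#
# Minimal version (prefer):
def parse_config(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping blanks, into a dict."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result

print(parse_config("host=localhost\nport=8080"))
# {'host': 'localhost', 'port': '8080'}
```

If a delimiter ever needs to change, that is a one-line follow-up edit - not a reason to build the abstraction today.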

## 3. Surgical Changes

**Touch only what you must. Clean up only your own mess.**

When editing existing code:
- Don't "improve" adjacent code, comments, or formatting.
- Don't refactor things that aren't broken.
- Match existing style, even if you'd do it differently.
- If you notice unrelated dead code, mention it - don't delete it.

When your changes create orphans:
- Remove imports/variables/functions that YOUR changes made unused.
- Don't remove pre-existing dead code unless asked.

The test: Every changed line should trace directly to the user's request.
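The orphan rule can be sketched with a hypothetical module (the `save` function and the "switch to CSV" request are invented for illustration). The old code's only use of `json` was the line being replaced, so the import becomes an orphan of this change and is removed along with it:

```python
# Before the requested change, the module serialized to JSON:
#
#     import json
#
#     def save(data, path):
#         with open(path, "w") as f:
#             f.write(json.dumps(data))
#
# Requested change: "save as CSV instead of JSON". The edit below makes
# the json import unused, so THIS change removes it. Any pre-existing
# dead code elsewhere in the module stays untouched.
import csv

def save(data: list[dict], path: str) -> None:
    """Write a list of dicts to a CSV file with a header row."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(data[0]))
        writer.writeheader()
        writer.writerows(data)
```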

## 4. Goal-Driven Execution

**Define success criteria. Loop until verified.**

Transform tasks into verifiable goals:
- "Add validation" → "Write tests for invalid inputs, then make them pass"
- "Fix the bug" → "Write a test that reproduces it, then make it pass"
- "Refactor X" → "Ensure tests pass before and after"

For multi-step tasks, state a brief plan:
```
1. [Step] → verify: [check]
2. [Step] → verify: [check]
3. [Step] → verify: [check]
```

Strong success criteria let you loop independently. Weak criteria ("make it work") require constant clarification.
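The "fix the bug" transformation above can be sketched end to end. The bug report and the `slugify` function are hypothetical, invented for illustration; the point is that the reproducing test is written first and then doubles as the success criterion:

```python
# Hypothetical bug report: "slugify drops accented characters entirely"
# (old code stripped non-ASCII, so "Café" became "caf").
#
# Step 1: write a test that reproduces the bug.
# Step 2: make the minimal fix that lets it pass.
import re
import unicodedata

def slugify(text: str) -> str:
    """Lowercase, transliterate accents to ASCII, join words with '-'."""
    # Fix: decompose accented characters (NFKD) and drop only the
    # combining marks, instead of discarding the whole character.
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# The reproducing test is the verifiable goal - loop until it passes:
assert slugify("Café au Lait") == "cafe-au-lait"
assert slugify("hello world") == "hello-world"
```

Once the assertion passes, the loop terminates on its own; no round trip to the user is needed to declare the fix done.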

Overview

This skill provides behavioral guidelines to reduce common LLM coding mistakes by prioritizing caution, clarity, and minimalism. It helps agents and developers write, review, or refactor code with clear assumptions, surgical edits, and verifiable success criteria. Use it to avoid overcomplication and surface hidden tradeoffs during coding tasks.

How this skill works

The skill instructs the agent to explicitly state assumptions, enumerate interpretations, and ask clarifying questions before coding. It enforces a simplicity-first mindset: deliver the minimum code required and avoid speculative features. For existing code, it limits edits to only what the request requires and prescribes explicit success criteria and verification steps for each change.

When to use it

  • Writing new code where requirements are ambiguous
  • Reviewing or refactoring existing code to avoid unnecessary changes
  • Implementing fixes where small, targeted edits are preferred
  • Automating coding tasks where changes must stay conservative and verifiable
  • Creating tests or reproductions for reported bugs

Best practices

  • Always surface assumptions and ask questions when unsure
  • Prefer the smallest, simplest implementation that satisfies requirements
  • Make only surgical changes; avoid touching unrelated code or style
  • Define clear, testable success criteria before editing
  • When making changes, remove only artifacts introduced by those changes
  • If multiple interpretations exist, present alternatives rather than choosing silently

Example use cases

  • Add validation: write failing tests for bad inputs, implement minimal fix, verify tests pass
  • Fix a bug: create a reproducible test for the bug, change only the failing behavior, rerun tests
  • Refactor for clarity: outline a step plan, run tests before and after, change only targeted modules
  • Code generation for small features: produce the simplest implementation, document assumptions and tradeoffs
  • Code review: point out overcomplication, suggest surgical edits without rewriting unrelated areas

FAQ

What if the simplest solution seems fragile later?

Document the assumptions and constraints that make the simple solution valid. If future flexibility is required, propose a follow-up change with clear scope and tests.

How do I decide when to refactor adjacent code?

Refactor adjacent code only if it directly makes the requested change harder or incorrect. Otherwise, note it as a separate recommendation and ask permission before tackling it.

What if the user asks for 'make it better' without specifics?

Ask for success criteria and priorities (performance, readability, extensibility). Offer a minimal, safe improvement option plus a list of larger changes that would require broader edits and verification.