
codify-learning skill

/skills/codify-learning

This skill converts session learnings into durable, executable improvements by codifying fixes, feedback, and insights, making future debugging faster.

npx playbooks add skill phrazzld/claude-config --skill codify-learning

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.1 KB
---
name: codify-learning
description: |
  Transform session learnings into permanent, executable improvements.
  Invoke at end of any session that involved debugging, fixing, or learning something.
  Default: Codify everything. Exception: Justify not codifying.
effort: high
---

# /codify-learning

Transform ephemeral learnings into durable system improvements.

## Philosophy

**Default codify, justify exceptions.** Every correction, feedback, or "I should have known" moment represents a gap in the system. Codification closes that gap.

The "3+ occurrences" threshold is a myth - we have no cross-session memory. If you learned something, codify it.

## Process

### 1. Identify Learnings

Scan the session for:
- Errors encountered and how they were fixed
- PR feedback received
- Debugging insights ("the real problem was...")
- Workflow improvements discovered
- Patterns that should be enforced

### 2. Brainstorm Codification Targets

For each learning, consider:
- **Hook** - Should this be guaranteed/blocked? (most deterministic)
- **Agent** - Should a reviewer catch this pattern?
- **Skill** - Is this a reusable workflow?
- **CLAUDE.md** - Is this philosophy/convention?

Choose the target that provides the most leverage. Hooks > Agents > Skills > CLAUDE.md for enforcement. Skills > CLAUDE.md for workflows.
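As a rough guide, assuming a user-level Claude Code setup (exact paths vary by project, and the hooks/ directory is a common convention rather than a requirement), the four targets typically live here:

```
~/.claude/CLAUDE.md               # philosophy and conventions
~/.claude/settings.json           # hook wiring (see the sketch in step 3)
~/.claude/hooks/                  # hook scripts referenced from settings.json
~/.claude/agents/<name>.md        # reviewer/subagent definitions
~/.claude/skills/<name>/SKILL.md  # reusable workflows like this one
```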

### 3. Implement

For each codification:
1. Read the target file
2. Add the learning in appropriate format
3. Wire up if needed (hooks need a settings.json entry; see the sketch after this list)
4. Verify no duplication
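
For step 3, a minimal sketch of hook wiring, assuming the current Claude Code settings.json hooks schema (event names and fields may differ across versions); the script path and its contents are hypothetical:

```
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/validate-config.sh"
          }
        ]
      }
    ]
  }
}
```

Here `validate-config.sh` stands in for a script that exits non-zero when the learned condition is violated; the matcher limits the check to file-editing tools.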

### 4. Report

```
CODIFIED:
- [learning] → [file]: [summary of change]

NOT CODIFIED:
- [learning]: [justification - must be specific]
```
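
For illustration, a filled-in report might look like this (the learnings and files are hypothetical):

```
CODIFIED:
- Force pushes kept bypassing review → settings.json: added PreToolUse hook blocking bare force pushes
- Missed changelog updates in PRs → agents/pr-reviewer.md: reviewer now checks for a changelog entry

NOT CODIFIED:
- One-off API outage during testing: external incident, no systemic gap to close
```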

## Anti-Patterns

❌ "No patterns detected" - One occurrence is enough
❌ "First time seeing this" - No cross-session memory exists
❌ "Seems too minor" - Minor issues compound into major friction
❌ "Not sure where to put it" - Brainstorm, ask, don't skip
❌ "Already obvious" - If it wasn't codified, the system didn't know it

See CLAUDE.md "Continuous Learning Philosophy" for valid exceptions and the full codification philosophy.

Overview

This skill turns session learnings into durable, executable improvements. It defaults to codifying any insight from debugging, fixes, or feedback, and requires a concrete justification when something is not codified. The goal is to close repeatable gaps by adding checks, automations, or documentation.

How this skill works

At session end, the skill scans for errors fixed, review feedback, debugging discoveries, and workflow improvements. It maps each learning to an enforcement or documentation target (hook, agent, skill, or CLAUDE.md), implements the change, and records a short report of what was codified and what was not. The skill prefers guaranteed enforcement (hooks) and documents philosophy in CLAUDE.md when appropriate.

When to use it

  • After any debugging session where a fix was applied
  • When PR feedback exposes a gap or omission
  • After discovering a configuration or workflow that caused mistakes
  • When you identify a pattern that should be enforced or automated
  • At the end of learning-focused sessions to prevent knowledge loss

Best practices

  • Default to codifying: treat each learning as actionable unless you record a precise reason not to
  • Pick the highest-leverage target: hooks > agents > skills > CLAUDE.md for enforcement
  • Keep changes minimal and reviewable: add focused checks or short guidance rather than sweeping edits
  • Verify and avoid duplication: read existing targets before implementing
  • Report clearly: list CODIFIED and NOT CODIFIED items with concise justifications

Example use cases

  • A bug was traced to a mis-parsed config: add a hook to validate that config key and document the pattern in CLAUDE.md
  • A reviewer flagged a common unsafe pattern: codify a static analysis rule or linter and attach a test case
  • Repeated developer confusion about a workflow step: implement a small skill that automates the step and add a how-to in CLAUDE.md
  • A flaky test revealed an edge case: add a targeted test and a pre-commit check to catch it
  • You learned a faster diagnostic command: codify it as a skill or snippet for immediate reuse (see the sketch after this list)
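
For the last case, a minimal SKILL.md sketch mirroring the frontmatter format shown above; the name, description, and command are hypothetical:

```
---
name: profile-startup
description: |
  Diagnose slow startup using the profiling command discovered during session work.
  Invoke when startup latency regresses.
effort: low
---

# /profile-startup

Run `node --cpu-prof server.js` and summarize the hottest frames.
```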

FAQ

What if the learning is truly one-off?

Still document it and explain why it should not be codified. A brief justification prevents future rework and preserves the context for others.

How do I choose between a hook and an agent?

Prefer hooks when you can deterministically block or enforce a condition. Use agents for patterns that require reviewer judgment or contextual checks.
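
As a rough illustration of the agent side, a reviewer subagent is typically a markdown file with YAML frontmatter (assuming the Claude Code subagent format; the name, tools, and check below are hypothetical):

```
---
name: config-reviewer
description: Reviews configuration changes for the mis-parse pattern found in past sessions.
tools: Read, Grep
---

You review edited config files. Flag any key that is read as a string
but compared as a number, and suggest an explicit parse step.
```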