
skill-builder skill


This skill automatically creates reusable Claude Code skills when it detects repeatable workflows or novel solutions, then informs you what was created.

npx playbooks add skill phrazzld/claude-config --skill skill-builder

Review the files below or copy the command above to add this skill to your agents.

Files (6)
SKILL.md
4.6 KB
---
name: skill-builder
description: |
  Create new Claude Code skills from procedural knowledge. Invoke PROACTIVELY when:
  - Discovering a reusable workflow pattern worth preserving
  - User asks to "capture this as a skill" or "make this reusable"
  - Recognizing institutional knowledge that should persist
  - After solving a problem in a novel way worth repeating
  - Noticing repeated patterns across sessions
  AUTONOMOUS: Create skills proactively, then inform user what was created.
effort: high
---

# Skill Builder

Build new Claude Code skills that capture procedural knowledge.

## Autonomy Model

**Create then inform**: When recognizing skill-worthy knowledge, create the skill proactively, then inform the user what was created and why. Don't ask permission first.

## When to Invoke (Proactively)

1. **Successful novel solution** - Just solved something in a way worth repeating
2. **Repeated pattern** - Noticed doing the same thing multiple times
3. **Institutional knowledge** - Learning domain-specific rules that should persist
4. **User workflow** - User demonstrates a process they want automated
5. **Research findings** - Discovered best practices worth preserving

## Quality Gates (Pre-Extraction)

Before creating a skill, verify ALL gates pass:

| Gate | Question | Fail Criteria |
|------|----------|---------------|
| **REUSABLE** | Applies beyond this instance? | One-off solution |
| **NON-TRIVIAL** | Required discovery, not docs lookup? | Just followed documentation |
| **SPECIFIC** | Clear trigger conditions defined? | Vague "sometimes useful" |
| **VERIFIED** | Solution confirmed working? | Theoretical, untested |

If ANY gate fails → Stop. Not skill-worthy.
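The gate check is effectively a conjunction of four booleans. A minimal sketch (the gate names come from the table above; the function itself is illustrative, not part of this skill's bundled scripts):

```python
def passes_quality_gates(reusable: bool, non_trivial: bool,
                         specific: bool, verified: bool) -> bool:
    """Return True only when every pre-extraction gate passes."""
    return all([reusable, non_trivial, specific, verified])

# A one-off, untested fix fails two gates and is not skill-worthy.
print(passes_quality_gates(reusable=False, non_trivial=True,
                           specific=True, verified=False))  # False
```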

## Skill Creation Workflow

### 0. Research Best Practices

**Before extracting, search for current patterns:**

```bash
# Use Gemini CLI for web-grounded research
gemini "[technology] [feature] best practices 2026"
gemini "[technology] [problem type] official recommendations"
```

Why: don't just codify what you did; incorporate current best practices as well.
Skip if: the pattern is a project-specific internal convention.

### 0.5. Classify Skill Type

Before creating, determine whether the skill is foundational or workflow:

| Type | Characteristics | Frontmatter | Action |
|------|-----------------|-------------|--------|
| **Foundational** | Universal patterns, applies broadly, no explicit trigger needed | `user-invocable: false` | Add compressed summary to CLAUDE.md |
| **Workflow** | Explicit trigger, action-oriented, user/model invokes | (default) | Skill only |

**If Foundational:**
1. Write skill as normal with `user-invocable: false`
2. Create 20-30 line compressed summary extracting core principles
3. Add summary to CLAUDE.md "Passive Knowledge Index" section
4. Add skill to Skills Index (pipe-delimited format)
5. Inform user: "Added to passive context — no invocation needed"

**If Workflow:**
1. Write skill as normal (default is invocable)
2. No CLAUDE.md changes needed
3. Inform user of trigger terms

**Foundational indicators:**
- Would benefit all code, not specific triggers
- Patterns that should "always be on"
- Language-agnostic principles
- Universal conventions (naming, testing, docs)

**Workflow indicators:**
- Explicit action verb (audit, configure, check, fix)
- Specific domain (stripe, posthog, lightning)
- User would say "/skill-name" to invoke
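For example, a foundational skill's frontmatter might look like this (the skill name and description here are hypothetical, shown only to illustrate the `user-invocable: false` flag):

```yaml
---
name: naming-conventions
description: |
  Universal naming conventions for variables, files, and tests.
  Applies passively to all code; no explicit trigger needed.
user-invocable: false
effort: low
---
```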

### 1. Identify the Knowledge
- What problem does this solve?
- What trigger terms would activate it?
- Is it cross-project or project-specific?

### 2. Draft Structure
Reference `references/structure-guide.md` for ideal anatomy.

### 3. Write Description
Reference `references/description-patterns.md` for trigger-rich descriptions (~100 words, explicit trigger terms).
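A trigger-rich description front-loads the exact phrases that should activate the skill. A hypothetical example (the skill domain and trigger phrases are illustrative):

```yaml
description: |
  Audit Stripe webhook handlers for idempotency and signature
  verification. Invoke when the user says "audit stripe webhooks",
  "check webhook security", or after adding a new webhook endpoint.
```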

### 4. Validate
Run `scripts/validate_skill.py <skill-path>` to check structure and frontmatter.

### 5. Inform User
After creating, tell user:
- What skill was created and why
- What triggers will activate it
- How to test it works

## Progressive Disclosure

Keep SKILL.md lean (<100 lines). Put detailed specs in `references/`:
- Detailed examples → `references/examples.md`
- Edge cases → `references/edge-cases.md`
- Anti-patterns → `references/anti-patterns.md`

## Code Opportunities

If a skill involves deterministic operations (validation, parsing, extraction), create a `scripts/` directory with executable code rather than prose instructions. Scripts:
- Run without loading into context
- Must be executable (`chmod +x`)
- Should handle errors gracefully
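As a sketch of what such a script might contain (this is not the bundled `scripts/validate_skill.py`, just an illustration of the pattern), a minimal frontmatter check:

```python
#!/usr/bin/env python3
"""Check that a SKILL.md has YAML frontmatter with required fields."""
import sys

REQUIRED_FIELDS = ("name", "description")

def validate_frontmatter(text: str) -> list[str]:
    """Return a list of error messages; an empty list means it passed."""
    if not text.startswith("---"):
        return ["missing frontmatter delimiter '---'"]
    # Everything between the first two '---' markers is frontmatter.
    parts = text.split("---", 2)
    if len(parts) < 3:
        return ["unterminated frontmatter block"]
    frontmatter = parts[1]
    return [f"missing required field: {field}"
            for field in REQUIRED_FIELDS
            if f"{field}:" not in frontmatter]

if __name__ == "__main__" and len(sys.argv) > 1:
    errors = validate_frontmatter(open(sys.argv[1]).read())
    for e in errors:
        print(f"ERROR: {e}")
    sys.exit(1 if errors else 0)
```

Exiting nonzero on failure lets the validation step gate skill creation in the same way a test suite gates a commit.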

## Skill Locations

- Personal: `~/.claude/skills/` - Available across all projects
- Project: `.claude/skills/` - Shared with team via git

## Template

Use `templates/SKILL-TEMPLATE.md` as starting point for new skills.

Overview

This skill creates new Claude Code skills from procedural knowledge discovered during work. It operates proactively: when a reusable workflow, repeated pattern, or institutional rule is detected, it generates a new skill and then informs the user what was created and why. The goal is to preserve repeatable solutions and make them invocable or passively available across sessions.

How this skill works

The skill monitors sessions for procedures that pass the four quality gates (reusable, non-trivial, specific, verified). When a candidate is found, it classifies the knowledge as foundational or workflow, gathers context and best-practice references, drafts the skill structure and description, validates the artifact, and creates executable scripts when appropriate. After creation it reports the new skill, its trigger terms, and test steps to the user.

When to use it

  • After solving a problem with a repeatable, non-trivial method worth preserving
  • When observing the same sequence of steps across multiple sessions or users
  • When domain or project rules emerge that should persist as institutional knowledge
  • When a user asks to “capture this as a skill” or “make this reusable”
  • When research or experiments produce a best-practice workflow worth codifying

Best practices

  • Run quick external research on current best practices before extracting to avoid codifying outdated approaches
  • Apply strict quality gates: reusable, non-trivial, specific triggers, and verified outcome — stop if any gate fails
  • Classify as foundational (passive principles) or workflow (invocable) to decide invocation and placement
  • Create small executable scripts for deterministic tasks instead of long prose when possible
  • Keep skill entries compact and place detailed examples, edge cases, and anti-patterns in supporting reference files
  • Inform the user immediately after creation with why it was created, triggers, and how to test

Example use cases

  • Captured a multi-step data validation routine used across projects and generated a validation skill with a CLI script
  • Noticed a repeated deployment checklist and created a workflow skill triggered by 'run deployment audit'
  • Converted a discovered security hardening pattern into a foundational skill so it applies silently across projects
  • Automated a custom error-parsing routine into an executable script and added trigger phrases for quick invocation
  • Turned an ad-hoc API pagination solution into a reusable workflow skill with usage examples and tests

FAQ

Will the skill create new skills without asking me first?

Yes — this skill follows a create-then-inform model: it creates the skill proactively but always informs you afterward with details and test steps.

How do you decide if something is skill-worthy?

Every candidate must pass four gates: reusable beyond the current instance, non-trivial discovery, clear trigger conditions, and a verified working solution.