
This skill analyzes Claude Code skills for compliance and token efficiency, guiding improvements and generating an optimized, ready-to-deploy SKILL.md.

npx playbooks add skill bahayonghang/my-claude-code-settings --skill skill_optimizer

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: skill-optimizer
description: Analyze Claude Code skills for compliance and token efficiency. Use when reviewing skills.
category: skill-management
tags:
  - optimization
  - analysis
  - skill-authoring
argument-hint: [skill-directory-path]
allowed-tools: Read, Glob, Grep, Bash(python *)
---

Optimize the Claude Code skill at `$ARGUMENTS`.

## Steps

1. If `$ARGUMENTS` is empty or the path does not contain SKILL.md, report: "Error: Provide a valid skill directory path containing SKILL.md."
2. Run: `python "$SKILL_DIR/scripts/analyze_skill.py" "$ARGUMENTS"`
3. Read `$SKILL_DIR/resources/CHECKLIST.md` and `$SKILL_DIR/resources/PATTERNS.md`.
4. Cross-reference JSON report with CHECKLIST and PATTERNS.
5. Present findings: **Critical → Recommended → Optional**, each with before/after fix.
6. Generate optimized SKILL.md resolving all Critical and Recommended issues.

## Output

Report issues by severity, then a token budget table (Before/After/Δ), then the full optimized SKILL.md.

## Rules

- Only official frontmatter fields: name, description, argument-hint, disable-model-invocation, user-invocable, allowed-tools, model, context, agent, hooks, category, tags.
- Optimized SKILL.md: < 300 tokens body, imperative voice, no educational content inline.
- Preserve original intent. Move reference content to resources/.
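The frontmatter rule can be checked mechanically. A minimal sketch, assuming top-level frontmatter keys appear as unindented `key: value` lines (full YAML parsing is deliberately out of scope here):

```python
# Official frontmatter fields, taken from the rule above.
ALLOWED_FIELDS = {
    "name", "description", "argument-hint", "disable-model-invocation",
    "user-invocable", "allowed-tools", "model", "context", "agent",
    "hooks", "category", "tags",
}

def unknown_frontmatter_fields(skill_md: str) -> list[str]:
    """Return top-level frontmatter keys not in the official list."""
    lines = skill_md.splitlines()
    if not lines or lines[0].strip() != "---":
        return []  # no frontmatter block at all
    fields = []
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        # Top-level keys only; indented or "-"-prefixed lines are nested values.
        if line[:1] not in (" ", "\t", "-") and ":" in line:
            fields.append(line.split(":", 1)[0].strip())
    return [f for f in fields if f not in ALLOWED_FIELDS]
```

A manifest adding, say, a `version:` field would be flagged, while the fields shown in the SKILL.md above all pass.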

Overview

This skill analyzes Claude Code skills for compliance, structure, and token efficiency. It automates detection of critical, recommended, and optional fixes and produces an optimized skill manifest that preserves intent while reducing token cost, along with a token budget comparison showing before/after savings.
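A before/after token budget comparison of the kind described could be sketched as below; the 4-characters-per-token heuristic is an assumption for illustration, not the tokenizer the real analyzer uses:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token. An assumption, not exact.
    return max(1, len(text) // 4)

def token_budget_table(before: str, after: str) -> str:
    """Render a Before/After/Δ markdown table for a skill body."""
    b, a = estimate_tokens(before), estimate_tokens(after)
    rows = [
        ("Section", "Before", "After", "Δ"),
        ("SKILL.md body", str(b), str(a), str(a - b)),
    ]
    return "\n".join("| " + " | ".join(r) + " |" for r in rows)
```

Trimming a 400-character body to 200 characters, for example, would show roughly 100 → 50 tokens with a Δ of -50.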

How this skill works

It runs a static analysis script against a skill directory, then reads canonical checklist and pattern resources to cross-reference findings. The tool classifies issues as Critical, Recommended, or Optional and pairs each finding with a concrete before/after fix. Finally, it generates an optimized skill manifest constrained to the allowed frontmatter fields, with a concise body under the 300-token budget.
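The pipeline above might look roughly like this. The `scripts/analyze_skill.py` and `resources/` layout comes from the SKILL.md steps; the `optimize_skill` function name and the analyzer's JSON output shape are assumptions for illustration:

```python
import json
import os
import subprocess
import sys

def optimize_skill(skill_dir: str, skill_home: str) -> dict:
    """Validate the skill path, run the analyzer, and load reference resources."""
    # Step 1: the target directory must contain SKILL.md.
    manifest = os.path.join(skill_dir, "SKILL.md")
    if not skill_dir or not os.path.isfile(manifest):
        raise SystemExit(
            "Error: Provide a valid skill directory path containing SKILL.md."
        )
    # Step 2: run the static analyzer and capture its JSON report.
    result = subprocess.run(
        [sys.executable,
         os.path.join(skill_home, "scripts", "analyze_skill.py"),
         skill_dir],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    # Step 3: load the canonical checklist and pattern resources.
    resources = {}
    for name in ("CHECKLIST.md", "PATTERNS.md"):
        with open(os.path.join(skill_home, "resources", name)) as f:
            resources[name] = f.read()
    # Steps 4-6 (cross-referencing and rewriting) happen downstream.
    return {"report": report, "resources": resources}
```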

When to use it

  • Before submitting a skill for review to ensure compliance and efficiency
  • When preparing a skill for production to reduce model invocation cost
  • When consolidating multiple fixes into a prioritized remediation plan
  • During audits to create a clear before/after token budget report
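A prioritized remediation plan like the one described above can be modeled with a small data structure; the `Finding` field names are illustrative, not the analyzer's real schema:

```python
from dataclasses import dataclass

# Presentation order from the skill's output rules.
SEVERITY_ORDER = {"Critical": 0, "Recommended": 1, "Optional": 2}

@dataclass
class Finding:
    severity: str  # "Critical" | "Recommended" | "Optional"
    message: str   # what is wrong
    before: str    # offending snippet
    after: str     # proposed fix

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Sort findings Critical -> Recommended -> Optional for reporting."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f.severity])
```

Sorting by a fixed severity map keeps the report order stable regardless of the order the analyzer emits findings in.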

Best practices

  • Run analysis on the canonical skill directory that contains the skill manifest and resources
  • Address Critical issues first, then Recommended; tackle Optional items as time allows
  • Keep the manifest body imperative, directive, and under the token budget limit
  • Move long reference material into resources to keep the manifest concise
  • Use the token budget table to guide trimming of prompts and examples

Example use cases

  • Automating a pre-release compliance check that enforces frontmatter field rules
  • Reducing token usage for high-traffic skills to lower runtime costs
  • Consolidating audit findings into an actionable remediation list with before/after examples
  • Generating a compact, production-ready skill manifest that preserves original behavior

FAQ

What does the optimizer require as input?

A valid skill directory containing the skill manifest and resource files.

Will the optimizer change behavior?

It preserves intent but trims and restructures content for compliance and token efficiency; all changes are presented as before/after fixes.