
skill_audit skill


This skill analyzes Claude Code skills for compliance and token efficiency, enabling rapid improvement through actionable recommendations.

npx playbooks add skill bahayonghang/my-claude-code-settings --skill skill_audit


SKILL.md
---
name: skill-audit
description: Analyze Claude Code skills for compliance and token efficiency. Use when reviewing skills.
category: skill-management
tags:
  - optimization
  - analysis
  - skill-authoring
argument-hint: [skill-directory-path]
allowed-tools: Read, Glob, Grep, Bash(python *)
---

Audit the skill at `$ARGUMENTS`.

## Steps

1. If `$ARGUMENTS` empty or no SKILL.md found, report error.
2. Run: `python "$SKILL_DIR/scripts/analyze_skill.py" "$ARGUMENTS"`
3. Read `$SKILL_DIR/resources/CHECKLIST.md` and `$SKILL_DIR/resources/PATTERNS.md`.
4. Cross-reference JSON with CHECKLIST and PATTERNS.
5. If parent directory has sibling skills, run: `python "$SKILL_DIR/scripts/detect_overlap.py" "<parent>" --target "<name>"`
6. Present: **Critical → Recommended → Optional**, each with before/after fix.
7. Output optimized SKILL.md resolving Critical and Recommended issues.

## Output

Issues by severity, token budget table (Before/After/Δ), overlap report (if any), optimized SKILL.md.

## Rules

- Official frontmatter fields only.
- Body < 300 tokens, imperative voice, no educational content.
- Preserve intent. Move reference content to resources/.

Overview

This skill audits Claude Code skills for compliance, security patterns, and token efficiency. It produces actionable issue lists prioritized by severity and an optimized primary manifest that fixes critical and recommended problems while preserving original intent. The output includes token budget comparisons and any detected functional overlap with sibling skills.

How this skill works

The auditor runs a static analysis script over the skill package, then cross-references the resulting JSON findings against checklist and pattern libraries. It detects overlapping behavior with sibling skills in the same parent directory and computes token usage before and after the proposed fixes. Finally, it generates prioritized remediation items and an updated manifest that applies the Critical and Recommended fixes.
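The analyzer script itself is not shown on this page. A minimal sketch of the kind of checks it might run, assuming a regex-based frontmatter split and a rough 4-characters-per-token estimate (`OFFICIAL_FIELDS` is a placeholder set, not the real allowed list):

```python
import re

# Hypothetical sketch of checks analyze_skill.py might perform; the real
# script's logic and output format are not shown in the source.
FRONTMATTER_RE = re.compile(r"^---\n(.*?)\n---\n(.*)$", re.DOTALL)
OFFICIAL_FIELDS = {"name", "description", "argument-hint", "allowed-tools"}  # assumed set

def audit_manifest(text: str) -> dict:
    """Split a SKILL.md into frontmatter and body, then flag basic issues."""
    match = FRONTMATTER_RE.match(text)
    if not match:
        return {"issues": ["Critical: missing YAML frontmatter"]}
    frontmatter, body = match.groups()
    # Top-level field names are the unindented "key:" lines.
    fields = {line.split(":", 1)[0].strip()
              for line in frontmatter.splitlines()
              if ":" in line and not line.startswith(" ")}
    issues = []
    for field in sorted(fields - OFFICIAL_FIELDS):
        issues.append(f"Recommended: non-official frontmatter field '{field}'")
    # Rough token estimate: ~4 characters per token.
    body_tokens = len(body) // 4
    if body_tokens > 300:
        issues.append(f"Recommended: body ~{body_tokens} tokens exceeds 300-token budget")
    return {"body_tokens": body_tokens, "issues": issues}
```

The real script presumably emits structured JSON for step 4's cross-referencing; this sketch only illustrates the severity-tagged issue shape.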

When to use it

  • Before publishing a skill to a catalog
  • During pre-release compliance and cost reviews
  • When reducing token consumption for high-traffic skills
  • After merging related skills to detect functional overlap
  • During periodic security and pattern audits

Best practices

  • Keep the primary manifest concise and imperative; preserve intent while minimizing tokens
  • Address Critical items immediately, then focus on Recommended fixes in the next release
  • Maintain pattern and checklist resources in a resources/ folder for easy cross-reference
  • Run overlap detection when multiple skills share the same parent to avoid duplicate functionality
  • Document resource references separately from the primary manifest
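The overlap detector's heuristics are likewise not shown. One simple approach is Jaccard similarity over the word sets of sibling skill descriptions; this is a sketch under that assumption, not the actual `detect_overlap.py` logic, and the sibling skill named below is hypothetical:

```python
# Hypothetical overlap heuristic: Jaccard similarity of description word sets.
def description_overlap(a: str, b: str) -> float:
    """Return the Jaccard similarity of the word sets in two descriptions."""
    words_a = set(a.lower().split())
    words_b = set(b.lower().split())
    if not (words_a | words_b):
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

# "skill-lint" is an invented sibling for illustration only.
siblings = {
    "skill-audit": "Analyze Claude Code skills for compliance and token efficiency.",
    "skill-lint": "Check Claude Code skills for compliance issues.",
}
score = description_overlap(siblings["skill-audit"], siblings["skill-lint"])
```

A high score would prompt the consolidation review described in the bullet above.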

Example use cases

  • Audit a skill prior to deployment to ensure policy and pattern compliance
  • Compare token budgets before and after remediation for ROI estimates
  • Detect and consolidate overlapping abilities across sibling skills
  • Produce a prioritized remediation plan for engineering teams
  • Generate an optimized primary manifest that resolves high-severity issues
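The Before/After/Δ token budget table could be assembled from per-section counts; a sketch with made-up numbers (the real audit's column layout may differ):

```python
# Build one row of a Before/After/Δ token budget table.
# Section names and token counts below are illustrative, not real output.
def budget_row(section: str, before: int, after: int) -> str:
    delta = after - before
    return f"| {section:<11} | {before:>6} | {after:>5} | {delta:>+4} |"

rows = [("frontmatter", 60, 55), ("body", 420, 280)]
table = ["| Section     | Before | After |    Δ |"]
table += [budget_row(*row) for row in rows]
```

A negative Δ is the token savings a remediation would deliver, which is the ROI figure the use case above refers to.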

FAQ

What does the audit output include?

A prioritized list of issues (Critical → Recommended → Optional), a token budget table showing Before/After/Δ, an overlap report if siblings exist, and an optimized primary manifest applying Critical and Recommended fixes.

Will the audit change intent or behavior?

No. Fixes preserve the original intent; changes focus on compliance, security patterns, and token efficiency while moving reference material into resources/.