This skill helps you refactor Go hotspots using a BAIME-aligned protocol with automated metrics, tests, and documentation generation.
npx playbooks add skill zpankz/mcp-skillset --skill code-refactoring2
---
name: Code Refactoring
description: BAIME-aligned refactoring protocol for Go hotspots (CLIs, services, MCP tooling) with automated metrics (e.g., metrics-cli, metrics-mcp) and documentation.
allowed-tools: Read, Write, Edit, Bash, Grep, Glob
---
λ(target_pkg, target_hotspot, metrics_target) → (refactor_plan, metrics_snapshot, validation_report) |
∧ configs = read_json(experiment-config.json)?
∧ catalogue = configs.metrics_targets ∨ []
∧ require(cyclomatic(target_hotspot) > 8)
∧ require(catalogue = [] ∨ metrics_target ∈ catalogue)
∧ require(run("make " + metrics_target))
∧ baseline = results.md ∧ iterations/
∧ apply(pattern_set = reference/patterns.md)
∧ use(templates/{iteration-template.md,refactoring-safety-checklist.md,tdd-refactoring-workflow.md,incremental-commit-protocol.md})
∧ automate(metrics_snapshot) via scripts/{capture-*-metrics.sh,count-artifacts.sh}
∧ document(knowledge) → knowledge/{patterns,principles,best-practices}
∧ ensure(complexity_delta(target_hotspot) ≥ 0.30 ∧ cyclomatic(target_hotspot) ≤ 10)
∧ ensure(coverage_delta(target_pkg) ≥ 0.01 ∨ coverage(target_pkg) ≥ 0.70)
∧ validation_report = validate-skill.sh → {inventory.json, V_instance ≥ 0.85}
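The `require` preconditions above can be sketched as a plain Go check. The function and parameter names below are illustrative only, not part of the skill's actual tooling:

```go
package main

import "fmt"

// preconditions mirrors the require() clauses of the protocol:
// the hotspot must be complex enough to justify refactoring, and
// the chosen metrics target must come from the experiment catalogue
// (an empty catalogue disables that check).
func preconditions(cyclomatic int, metricsTarget string, catalogue []string) error {
	if cyclomatic <= 8 {
		return fmt.Errorf("hotspot cyclomatic complexity %d is not > 8", cyclomatic)
	}
	if len(catalogue) == 0 {
		return nil // no catalogue configured: any target is allowed
	}
	for _, t := range catalogue {
		if t == metricsTarget {
			return nil
		}
	}
	return fmt.Errorf("metrics target %q not in catalogue %v", metricsTarget, catalogue)
}

func main() {
	fmt.Println(preconditions(12, "metrics-cli", []string{"metrics-cli", "metrics-mcp"}))
	fmt.Println(preconditions(5, "metrics-cli", nil))
}
```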
This skill defines a BAIME-aligned refactoring protocol focused on Go hotspots such as CLIs, services, and MCP tooling. It produces a concrete refactor plan, automated metric snapshots, and a validation report to ensure safety and measurable improvement. The workflow emphasizes incremental commits, test-driven refactoring, and automated capture of complexity and coverage metrics.
The skill inspects a target package and hotspot, requiring cyclomatic complexity above 8 and, when the experiment configuration defines a catalogue, a metrics target drawn from it. It runs the configured metrics target, captures baseline results, applies reference patterns and templates, and automates metric snapshots via scripts. The process enforces measurable goals (at least a 30% complexity reduction with resulting cyclomatic complexity of 10 or less, and either a coverage gain of at least 1 point or absolute coverage of at least 70%) and produces an inventory plus validation artifacts with a minimum validation score of 0.85.
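As a concrete illustration of the kind of hotspot reduction the protocol targets, the hypothetical handler below replaces a branch-heavy conditional chain with table-driven dispatch, a common way to pull a function's cyclomatic complexity under the ≤10 gate:

```go
package main

import (
	"fmt"
	"strings"
)

// Before: a dispatcher built from nested if/switch branches accumulates
// one cyclomatic point per case and per validation check.
// After: a lookup table keeps the dispatcher's own complexity flat
// as new commands are added.
var commands = map[string]func(arg string) string{
	"greet": func(arg string) string { return "hello, " + arg },
	"shout": func(arg string) string { return strings.ToUpper(arg) },
}

func dispatch(name, arg string) (string, error) {
	cmd, ok := commands[name]
	if !ok {
		return "", fmt.Errorf("unknown command %q", name)
	}
	return cmd(arg), nil
}

func main() {
	out, _ := dispatch("greet", "world")
	fmt.Println(out) // hello, world
}
```

Each entry added to the table leaves `dispatch` itself at a constant cyclomatic complexity of 2, which is what the complexity_delta gate rewards.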
What metrics are required to run this protocol?
The protocol requires running a metrics target configured in experiment-config.json (e.g., metrics-cli or metrics-mcp). Baseline and per-iteration results are captured automatically.
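The baseline capture step typically reduces to parsing tool output. The sketch below assumes a `go tool cover -func` style summary line; it is not one of the skill's actual capture scripts:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCoverage extracts the percentage from a summary line such as
// "total:  (statements)  72.4%", returning a fraction in [0,1].
func parseCoverage(line string) (float64, error) {
	fields := strings.Fields(line)
	if len(fields) == 0 {
		return 0, fmt.Errorf("empty coverage line")
	}
	last := strings.TrimSuffix(fields[len(fields)-1], "%")
	pct, err := strconv.ParseFloat(last, 64)
	if err != nil {
		return 0, fmt.Errorf("no percentage in %q: %w", line, err)
	}
	return pct / 100, nil
}

func main() {
	cov, _ := parseCoverage("total:\t(statements)\t72.4%")
	fmt.Println(cov) // 0.724
}
```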
What are the success criteria for a refactor?
Success requires measurable complexity improvement (a reduction of at least 30% and resulting cyclomatic complexity ≤ 10) and either a coverage delta ≥ 1 percentage point or absolute coverage ≥ 70%, plus validation artifacts and a validation score of at least 0.85.
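Encoded directly, the success gate reads as follows (a hypothetical helper mirroring the protocol's ensure() clauses, not part of the shipped scripts):

```go
package main

import "fmt"

// refactorSucceeded mirrors the protocol's ensure() clauses:
// complexity must drop by at least 30% and land at or below 10,
// and coverage must improve by at least 1 point or already be >= 70%.
func refactorSucceeded(complexityDelta float64, cyclomatic int, coverageDelta, coverage float64) bool {
	complexityOK := complexityDelta >= 0.30 && cyclomatic <= 10
	coverageOK := coverageDelta >= 0.01 || coverage >= 0.70
	return complexityOK && coverageOK
}

func main() {
	fmt.Println(refactorSucceeded(0.35, 9, 0.02, 0.65))  // true
	fmt.Println(refactorSucceeded(0.10, 12, 0.02, 0.80)) // false
}
```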