
This skill verifies code against documentation using a 6-phase alignment process, delivering scoring and actionable fixes to reduce onboarding friction.

npx playbooks add skill greyhaven-ai/claude-code-config --skill documentation-alignment

Review the files below or copy the command above to add this skill to your agents.

Files (6): SKILL.md (1.6 KB)
---
name: grey-haven-documentation-alignment
description: "6-phase verification system ensuring code matches documentation with automated alignment scoring (signature, type, behavior, error, example checks). Reduces onboarding friction 40%. Use when verifying code-docs alignment, onboarding developers, after code changes, pre-release documentation checks, or when user mentions 'docs out of sync', 'documentation verification', 'code-docs alignment', 'docs accuracy', 'documentation drift', or 'verify documentation'."
# v2.0.43: Skills to auto-load for docs verification
skills:
  - grey-haven-code-style
# v2.0.74: Tools for documentation alignment verification
allowed-tools:
  - Read
  - Grep
  - Glob
  - TodoWrite
---

# Documentation Alignment Skill

6-phase verification ensuring code implementations match their documentation with automated alignment scoring.

## Description

Systematic verification of code-documentation alignment through discovery, extraction, analysis, classification, fix generation, and validation.

## What's Included

- **Examples**: Function signature mismatches, parameter changes, type updates
- **Reference**: 6-phase process, alignment scoring formula
- **Templates**: Alignment report structures
- **Checklists**: 101-point verification checklist

## Alignment Scoring

Score = (Signature×30% + Type×25% + Behavior×20% + Error×15% + Example×10%), where each component is a 0–100 sub-score
- 95-100: Perfect
- 80-94: Good
- 60-79: Poor
- 0-59: Failing
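The weighted formula and the score bands above can be sketched as follows. This is a minimal illustration of the scoring described in this document, not the skill's actual implementation; the function names are hypothetical.

```python
def alignment_score(signature, type_, behavior, error, example):
    """Weighted sum of 0-100 component sub-scores, per the formula above."""
    return (signature * 0.30 + type_ * 0.25 + behavior * 0.20
            + error * 0.15 + example * 0.10)

def classify(score):
    """Map a 0-100 score onto the bands defined above."""
    if score >= 95:
        return "Perfect"
    if score >= 80:
        return "Good"
    if score >= 60:
        return "Poor"
    return "Failing"
```

For example, sub-scores of 90/80/100/70/60 give 27 + 20 + 20 + 10.5 + 6 = 83.5, which falls in the Good band.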

## Use When

- Onboarding new developers (reduces friction 40%)
- After code changes
- Pre-release documentation verification

## Related Agents

- `documentation-alignment-verifier`

**Skill Version**: 1.0

Overview

This skill implements a 6-phase verification system that ensures code matches its documentation by producing an automated alignment score. It identifies signature, type, behavior, error, and example mismatches and generates actionable fixes and a structured alignment report. The result is clearer documentation, fewer onboarding issues, and faster pre-release checks.

How this skill works

The system discovers code and documentation pairs, extracts signatures and examples, and performs automated analyses across six phases: discovery, extraction, analysis, classification, fix generation, and validation. It computes a weighted alignment score (signature 30%, type 25%, behavior 20%, error handling 15%, examples 10%) and classifies results into Perfect, Good, Poor, or Failing. Reports include identified mismatches, suggested fixes, and a verification checklist to guide remediation.
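The six phases above form a sequential pipeline. A minimal sketch of that ordering follows; the real skill drives these steps through agent tools (Read, Grep, Glob), so the function shapes here are purely illustrative.

```python
# Phase order taken from the description above; handler signatures are hypothetical.
PHASES = ["discovery", "extraction", "analysis",
          "classification", "fix_generation", "validation"]

def run_pipeline(context, handlers):
    """Run each phase in order; handlers maps phase name -> fn(context) -> context."""
    for phase in PHASES:
        context = handlers[phase](context)
    return context
```

Each phase consumes the previous phase's output, so a failed extraction (for example, no docstring found) surfaces before any scoring happens.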

When to use it

  • After code changes that might affect public APIs or examples
  • During pre-release documentation verification and QA gates
  • When onboarding new developers to reduce ramp-up friction
  • When a user reports ‘docs out of sync’ or requests documentation verification
  • As a periodic audit to detect documentation drift

Best practices

  • Run the verification as part of CI for pull requests touching public interfaces
  • Prioritize fixes by alignment score component (start with signature and type mismatches)
  • Attach the generated alignment report to the related issue or PR for context
  • Use the 101-point verification checklist to validate manual and edge-case scenarios
  • Re-run validation after applying suggested fixes to ensure score improvement

Example use cases

  • Detecting a changed function signature after a refactor and generating a doc update patch
  • Verifying example snippets match actual behavior and error conditions before release
  • Onboarding: running alignment checks on key libraries to highlight doc gaps for new hires
  • Post-merge verification to ensure a hotfix didn’t introduce documentation drift
  • Automated gating to block releases when alignment score falls into Poor or Failing ranges
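The release-gating use case above amounts to a threshold check against the score bands. A minimal sketch, assuming the gate blocks anything below the Good band (80):

```python
def release_allowed(score, minimum=80):
    """Block releases whose alignment score is Poor or Failing (below 80)."""
    return score >= minimum
```

In practice this check would run in CI and fail the job when it returns False; the threshold of 80 is the lower bound of the Good band defined earlier.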

FAQ

How is the alignment score calculated?

The score is a weighted sum of 0–100 component sub-scores: Signature 30%, Type 25%, Behavior 20%, Error handling 15%, Example correctness 10%. For example, sub-scores of 90, 80, 100, 70, and 60 yield 27 + 20 + 20 + 10.5 + 6 = 83.5, which falls in the Good band.

What score thresholds indicate urgent fixes?

Scores 0–59 are Failing and require immediate attention; 60–79 are Poor and should be fixed before release; 80–94 are Good; 95–100 are Perfect.