
ln-600-docs-auditor skill

/ln-600-docs-auditor

This skill audits project documentation quality across hierarchy, SSOT, compression, requirements, actuality, legacy cleanup, stack adaptation, and semantics.

npx playbooks add skill levnikolaevich/claude-code-skills --skill ln-600-docs-auditor


SKILL.md
---
name: ln-600-docs-auditor
description: Audit project documentation quality across 8 categories (Hierarchy, SSOT, Compactness, Requirements, Actuality, Legacy, Stack Adaptation, Semantic Content). Delegates to ln-601 for deep semantic verification of project documents. Use when documentation needs quality review, after major doc updates, or as part of ln-100-documents-pipeline. Outputs Compliance Score X/10 per category + Findings + Recommended Actions.
---

> **Paths:** File paths (`shared/`, `references/`, `../ln-*`) are relative to skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for repo root.

# Documentation Auditor

Audit project documentation quality. Universal for any tech stack.

## Purpose

- **Proactively compress** - find all opportunities to reduce size while preserving value
- Eliminate meaningless, redundant, and verbose content
- Convert prose to structured formats (tables, lists)
- Verify documentation hierarchy with CLAUDE.md as root
- Detect duplication and enforce Single Source of Truth
- Ensure docs match current code state
- **Semantic verification** - delegate to ln-601 to verify content matches SCOPE and codebase reality

## Invocation

- **Direct:** User invokes for documentation quality review
- **Pipeline:** Called by ln-100-documents-pipeline (Phase 5, if auditDocs=true)

## Workflow

1. **Scan:** Find all .md files in project (CLAUDE.md, README.md, docs/**)
2. **Build Tree:** Construct hierarchy from CLAUDE.md outward links
3. **Audit Categories 1-7:** Run structural checks (see Audit Categories below)
4. **Semantic Audit (Category 8):** For each project document, delegate to ln-601-semantic-content-auditor
5. **Score:** Calculate X/10 per category (including semantic scores from ln-601)
6. **Report:** Output findings and recommended actions
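
As a concrete illustration, steps 1-2 could be sketched in Python. The function name, return shape, and link-matching regex are assumptions for illustration; the skill performs these checks without prescribing an implementation:

```python
import re
from pathlib import Path

def build_doc_tree(project_root: str) -> dict:
    """Walk outward from CLAUDE.md, mapping each reachable .md file to
    the .md files it links to, and report unreachable (orphan) docs."""
    root = Path(project_root)
    all_docs = {p.resolve() for p in root.rglob("*.md")}
    # Matches relative markdown links ending in .md (anchor suffixes ignored)
    link_re = re.compile(r"\[[^\]]*\]\(([^)#\s]+\.md)")
    tree, queue = {}, [root / "CLAUDE.md"]
    while queue:
        doc = queue.pop().resolve()
        if doc in tree or not doc.exists():
            continue
        text = doc.read_text(encoding="utf-8")
        children = [(doc.parent / href).resolve() for href in link_re.findall(text)]
        tree[doc] = children
        queue.extend(children)
    return {"tree": tree, "orphans": all_docs - set(tree)}
```

Any file in `orphans` violates category 1 (all docs must be reachable from CLAUDE.md).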

### Phase 4: Semantic Audit Delegation

For each project document (excluding tasks/, reference/, presentation/):

```
FOR doc IN [CLAUDE.md, docs/README.md, docs/project/*.md]:
    result = DELEGATE ln-601-semantic-content-auditor {
        doc_path: doc,
        project_root: project_root,
        tech_stack: detected_stack
    }
    semantic_findings.append(result.findings)
    semantic_scores[doc] = result.scores
```

**Target documents:** CLAUDE.md, docs/README.md, docs/documentation_standards.md, docs/principles.md, docs/project/*.md

**Excluded:** docs/tasks/, docs/reference/, docs/presentation/, tests/

## Audit Categories

| # | Category | What to Check |
|---|----------|---------------|
| 1 | **Hierarchy & Links** | CLAUDE.md is root; all docs reachable via links; no orphaned files; no broken links |
| 2 | **Single Source of Truth** | No content duplication; duplicates replaced with links to source; clear ownership |
| 3 | **Proactive Compression** | Eliminate verbose/redundant content; prose→tables; remove meaningless info; compress even under-limit files; see [size_limits.md](references/size_limits.md) |
| 4 | **Requirements Compliance** | Correct sections; within size limits; **no code blocks** (tables/ASCII diagrams/text only); stack-appropriate doc links |
| 5 | **Actuality (CRITICAL)** | **Verify facts against code:** paths exist, functions match, APIs work, configs valid; outdated docs are worse than none |
| 6 | **Legacy Cleanup** | No history sections; no "was changed" notes; no deprecated info; current state only |
| 7 | **Stack Adaptation** | Links/refs match project stack; no Python examples in .NET project; official docs for correct platform |
| 8 | **Semantic Content** | **Delegated to ln-601:** Content matches SCOPE; serves project goals; descriptions match actual code behavior; architecture/API docs reflect reality |
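
The broken-link portion of category 1 can be checked mechanically. This is a minimal sketch, assuming relative Markdown links and skipping external URLs; the helper name and return shape are illustrative:

```python
import re
from pathlib import Path

def find_broken_links(doc_path: str) -> list:
    """Return (line_number, target) pairs for relative links whose
    target file does not exist on disk. External URLs are skipped."""
    doc = Path(doc_path)
    link_re = re.compile(r"\[[^\]]*\]\(([^)]+)\)")
    broken = []
    for lineno, line in enumerate(doc.read_text(encoding="utf-8").splitlines(), 1):
        for target in link_re.findall(line):
            if target.startswith(("http://", "https://", "mailto:")):
                continue
            file_part = target.split("#")[0]  # drop anchor, keep path
            if file_part and not (doc.parent / file_part).resolve().exists():
                broken.append((lineno, target))
    return broken
```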

## Output Format

```markdown
## Documentation Audit Report - [DATE]

### Compliance Score

| Category | Score | Issues |
|----------|-------|--------|
| Hierarchy & Links | X/10 | N issues found |
| Single Source of Truth | X/10 | N duplications |
| Proactive Compression | X/10 | N compression opportunities |
| Requirements Compliance | X/10 | N violations |
| Actuality | X/10 | N mismatches with code |
| Legacy Cleanup | X/10 | N legacy items |
| Stack Adaptation | X/10 | N stack mismatches |
| Semantic Content | X/10 | N semantic issues (via ln-601) |
| **Overall** | **X/10** | |

### Critical Findings

- [ ] **[Category]** `path/file.md:line` - Issue description. **Action:** Fix suggestion.

### Recommended Actions

| Priority | Action | Location | Category |
|----------|--------|----------|----------|
| High | Remove duplicate section | docs/X.md | SSOT |
| Medium | Add link to CLAUDE.md | docs/Y.md | Hierarchy |
```

## Scoring Algorithm

**MANDATORY READ:** Load `shared/references/audit_scoring.md` for unified scoring formula.

**Severity mapping:**

| Issue Type | Severity |
|------------|----------|
| Outdated content (code mismatch) | CRITICAL |
| Broken links, orphaned docs | HIGH |
| Semantic mismatch (via ln-601) | HIGH |
| Content duplication | MEDIUM |
| Missing compression opportunity | LOW |
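
The unified formula lives in `shared/references/audit_scoring.md`; as a rough sketch only, a penalty-per-severity scheme might look like the following. The penalty weights are assumptions for illustration, not the official values:

```python
# Illustrative penalty weights (assumption, not the official formula)
SEVERITY_PENALTY = {"CRITICAL": 3.0, "HIGH": 2.0, "MEDIUM": 1.0, "LOW": 0.5}

def category_score(issue_severities: list) -> float:
    """Score a category out of 10 by subtracting a penalty per issue,
    floored at zero."""
    penalty = sum(SEVERITY_PENALTY[s] for s in issue_severities)
    return max(0.0, round(10.0 - penalty, 1))

def overall_score(category_scores: dict) -> float:
    """Overall compliance: plain average of the eight category scores."""
    return round(sum(category_scores.values()) / len(category_scores), 1)
```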

## Reference Files

- Size limits and targets: [references/size_limits.md](references/size_limits.md)
- Detailed checklist: [references/audit_checklist.md](references/audit_checklist.md)

## Definition of Done

- All .md files in project scanned and hierarchy tree built from CLAUDE.md
- Categories 1-7 (structural) audited with score X/10 each
- Category 8 (semantic) delegated to ln-601 for each target document; scores collected
- Overall Compliance Score calculated (average of 8 categories)
- Critical Findings listed with file:line, category, and fix suggestion
- Recommended Actions table generated with priority, action, location, category

## Critical Notes

- **Fix content, not rules:** NEVER modify standards/rules files (*_standards.md, *_rules.md, *_limits.md) to make violations pass. Always fix the violating files instead.
- **Verify facts against code:** Actively check every path, function name, API, config mentioned in docs. Run commands. Outdated docs mislead - they're worse than no docs.
- **Compress always:** Size limits are upper bounds, not targets. A 100-line file instead of 300 is a win. Always look for compression opportunities.
- **Meaningless content:** Remove filler words, obvious statements, over-explanations. If it doesn't add value, delete it.
- **No code in docs:** Documents describe algorithms in tables or ASCII diagrams. Code belongs in codebase.
  - **Forbidden:** Code blocks, implementation snippets
  - **Allowed:** Tables, ASCII diagrams, Mermaid, method signatures (1 line)
  - **Instead of code:** "See [Official docs](url)" or "See [src/file.cs:42](path#L42)"
- **Format Priority:** Tables/ASCII > Lists (enumerations only) > Text (last resort)
- **Stack adaptation:** Verify all documentation references match project stack. .NET project must not have Python examples. Check official doc links point to correct platform (Microsoft docs for C#, MDN for JS, etc.)
- **Code is truth:** When docs contradict code, always update docs. Never "fix" code to match documentation.
- **SSOT re-verification after fixes:** After making ANY documentation change, re-check that the fix maintains Single Source of Truth. If content exists in multiple files, keep it in the canonical source only and replace other occurrences with a link to that source (e.g., `See [section](path#anchor)`). Never duplicate content inline — always link. Canonical source hierarchy: CLAUDE.md → docs/README.md → docs/project/*.md → docs/reference/*.md.
- **Delete, don't archive:** Legacy content should be removed, not moved to "archive"
- **No history:** Documents describe current state only; git tracks history
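
The no-code-in-docs rule above lends itself to a mechanical check: flag fenced blocks whose language tag marks implementation code. The allowed set below is an assumption derived from the rule's exceptions (Mermaid and plain-text ASCII diagrams pass):

```python
from pathlib import Path

# Assumption: only diagram/plain-text fences are permitted in docs
ALLOWED_FENCE_LANGS = {"mermaid", "text", ""}

def find_forbidden_code_blocks(doc_path: str) -> list:
    """Return line numbers of opening fences whose language tag is not
    in the allowed set (i.e., implementation code blocks)."""
    violations, in_fence = [], False
    lines = Path(doc_path).read_text(encoding="utf-8").splitlines()
    for lineno, line in enumerate(lines, 1):
        stripped = line.strip()
        if stripped.startswith("```"):
            if not in_fence:
                lang = stripped[3:].strip().lower()
                if lang not in ALLOWED_FENCE_LANGS:
                    violations.append(lineno)
            in_fence = not in_fence
    return violations
```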

---
**Version:** 4.0.0
**Last Updated:** 2026-01-28

Overview

This skill audits project documentation quality across eight focused categories and produces a compliance report with scores, findings, and recommended actions. It integrates structural checks with delegated deep semantic verification (ln-601) to ensure docs reflect the codebase and project scope. Use it to enforce Single Source of Truth, compress and clean docs, and verify actuality against the repository.

How this skill works

The auditor scans all Markdown files, builds a document tree from CLAUDE.md, and runs structural checks for Hierarchy, SSOT, Compression, Requirements, Actuality, Legacy, and Stack Adaptation. For semantic accuracy it delegates each target document to ln-601-semantic-content-auditor and aggregates the scores. Finally, it computes per-category X/10 scores and an overall average, and emits findings with prioritized remediation actions.

When to use it

  • Before a release or handoff to ensure docs match code
  • After major documentation updates to validate quality and SSOT
  • As part of ln-100-documents-pipeline (Phase 5) when auditDocs=true
  • When onboarding new maintainers who need a reliable doc baseline
  • When reducing repo size and eliminating verbose or legacy content

Best practices

  • Keep CLAUDE.md as the documented root and ensure every doc is reachable from it
  • Remove duplicated content; replace duplicates with links to the canonical source
  • Compress prose into tables or lists; avoid long narrative sections and meaningless filler
  • Verify every documented path, API, and function against the codebase; run commands where needed
  • Never place implementation code in docs; use tables, ASCII diagrams, or links to source instead
  • After any doc fix, re-run SSOT checks to preserve a single canonical copy
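
The SSOT practice above can be approximated by hashing normalized paragraphs across the docs tree; any hash seen in more than one place is a duplication candidate. The length threshold and return shape are illustrative:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicate_paragraphs(project_root: str, min_len: int = 80) -> dict:
    """Map a normalized-paragraph hash to every (file, excerpt) pair that
    repeats it. Short paragraphs are ignored to cut noise."""
    seen = defaultdict(list)
    for doc in Path(project_root).rglob("*.md"):
        for para in doc.read_text(encoding="utf-8").split("\n\n"):
            normalized = " ".join(para.lower().split())
            if len(normalized) < min_len:
                continue
            key = hashlib.sha256(normalized.encode()).hexdigest()
            seen[key].append((str(doc), normalized[:60]))
    return {k: v for k, v in seen.items() if len(v) > 1}
```

Each surviving group marks content that should live in one canonical file, with the other occurrences replaced by links.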

Example use cases

  • Audit a repo where READMEs drift from implemented APIs and return a prioritized fix list
  • Run automatically during docs CI to block merges that introduce broken links or outdated facts
  • Compress a set of large specification files into concise tables and link to canonical definitions
  • Validate stack-specific examples to remove incorrect platform references (e.g., Python in .NET projects)
  • Perform a legacy cleanup that deletes history notes and deprecated sections outright (rather than archiving them), producing a compact current-state docs tree

FAQ

What does the compliance score represent?

Each category is scored X/10 using the unified scoring formula; the overall score is the average of the eight category scores.

Which files are excluded from semantic delegation?

docs/tasks/, docs/reference/, docs/presentation/, tests/ and any non-target paths are excluded; target docs include CLAUDE.md and key docs under docs/project/.

Will the auditor modify files automatically?

No. It reports findings and recommended actions; fixes are applied separately, and standards/rules files are never edited to make violations pass.