
ln-610-code-comments-auditor skill


This skill audits code comments and docstrings across six categories, delivering per-category scores, findings, and actionable recommendations.

npx playbooks add skill levnikolaevich/claude-code-skills --skill ln-610-code-comments-auditor

Review the files below or copy the command above to add this skill to your agents.

Files (3)
SKILL.md
---
name: ln-610-code-comments-auditor
description: Audit code comments and docstrings quality across 6 categories (WHY-not-WHAT, Density, Forbidden Content, Docstrings, Actuality, Legacy). Use when code needs comment review, after major refactoring, or as part of ln-100-documents-pipeline. Outputs Compliance Score X/10 per category + Findings + Recommended Actions.
---

> **Paths:** File paths (`shared/`, `references/`, `../ln-*`) are relative to skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for repo root.

# Code Comments Auditor

Audit the quality of code comments and docstrings. Universal for any tech stack.

## Purpose

- Verify comments explain WHY, not obvious WHAT
- Check comment density (15-20% ratio)
- Detect forbidden content (dates, author names, historical notes)
- Validate docstrings match function signatures
- Ensure comments match current code state
- Identify legacy comments and commented-out code
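The density check above can be sketched with a minimal line-based ratio. This is an illustrative assumption, not the skill's actual formula: it counts comment lines against all non-blank lines, and the function name and `marker` parameter are hypothetical.

```python
def comment_density(source: str, marker: str = "#") -> float:
    """Return the comment-to-code ratio as a percentage (0-100)."""
    code = comments = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue  # blank lines are excluded from the ratio
        if stripped.startswith(marker):
            comments += 1
        else:
            code += 1
    total = code + comments
    return 100.0 * comments / total if total else 0.0
```

A source file with 2 comment lines and 3 code lines would score 40%, well above the 15-20% target band.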

## Invocation

- **Direct:** User invokes for code comment quality review
- **Pipeline:** Called by ln-100-documents-pipeline (Phase 5, if auditComments=true)

## Workflow

1. **Scan:** Find all source files (auto-detect tech stack)
2. **Extract:** Parse inline comments + docstrings/JSDoc
3. **Audit:** Run 6 category checks (see Audit Categories below)
4. **Score:** Calculate X/10 per category
5. **Report:** Output findings and recommended actions
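The Extract step might look like the regex sketch below for Python sources. The patterns are simplified assumptions (a real extractor would tokenize to avoid matching `#` inside string literals); `extract_comments` is a hypothetical name, not part of the skill.

```python
import re

def extract_comments(source: str) -> dict:
    """Pull triple-quoted docstrings and '#' inline comments from source.

    Note: regex-based, so '#' inside string literals is a known false positive.
    """
    docstrings = re.findall(r'"""(.*?)"""', source, re.DOTALL)
    inline = [m.group(1).strip()
              for m in re.finditer(r'#(.*)$', source, re.MULTILINE)]
    return {"inline": inline, "docstrings": docstrings}
```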

## Audit Categories

| # | Category | What to Check |
|---|----------|---------------|
| 1 | **WHY not WHAT** | Comments explain rationale, not obvious code behavior; no restating code |
| 2 | **Density (15-20%)** | Comment-to-code ratio within range; not over/under-commented |
| 3 | **No Forbidden Content** | No dates/authors; no historical notes; no code examples in comments |
| 4 | **Docstrings Quality** | Match function signatures; parameters documented; return types accurate |
| 5 | **Actuality** | Comments match code behavior; no stale references; examples runnable |
| 6 | **Legacy Cleanup** | No TODO without context; no commented-out code; no deprecated notes |
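Category 3 lends itself to simple pattern matching. The patterns below are illustrative assumptions, not the skill's actual rules (those live in `references/comments_rules.md`):

```python
import re

# Illustrative forbidden-content patterns: ISO dates, author tags,
# and historical change notes.
FORBIDDEN = {
    "date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "author": re.compile(r"\b[Aa]uthor:\s*\S+"),
    "history": re.compile(r"\b(changed|added|removed) (on|in) \d{4}\b"),
}

def forbidden_items(comment: str) -> list[str]:
    """Return the names of forbidden patterns found in a comment."""
    return [name for name, pat in FORBIDDEN.items() if pat.search(comment)]
```

For example, `forbidden_items("Author: jdoe, reviewed 2024-01-15")` flags both a date and an author tag, while a rationale comment such as "Retry twice because the upstream API is flaky" passes clean.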

## Output Format

```markdown
## Code Comments Audit Report - [DATE]

### Compliance Score

| Category | Score | Issues |
|----------|-------|--------|
| WHY not WHAT | X/10 | N obvious comments |
| Density (15-20%) | X/10 | X% actual (target: 15-20%) |
| No Forbidden Content | X/10 | N forbidden items |
| Docstrings Quality | X/10 | N mismatches |
| Actuality | X/10 | N stale comments |
| Legacy Cleanup | X/10 | N legacy items |
| **Overall** | **X/10** | |

### Critical Findings

- [ ] **[Category]** `path/file:line` - Issue description. **Action:** Fix suggestion.

### Recommended Actions

| Priority | Action | Location | Category |
|----------|--------|----------|----------|
| High | Remove author name | src/X:45 | Forbidden |
| Medium | Update stale docstring | lib/Y:120 | Actuality |
```

## Scoring Algorithm

**MANDATORY READ:** Load `shared/references/audit_scoring.md` for unified scoring formula.

**Severity mapping:**

| Issue Type | Severity |
|------------|----------|
| Author names, dates in comments | CRITICAL |
| Commented-out code blocks | HIGH |
| Stale/outdated comments | HIGH |
| Obvious WHAT comments | MEDIUM |
| Density deviation >5% | MEDIUM |
| Minor density deviation | LOW |
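A weighted-deduction sketch in the spirit of the severity mapping above; the weights here are invented for illustration and the actual formula is defined in `shared/references/audit_scoring.md`:

```python
# Hypothetical severity weights; the unified formula may differ.
WEIGHTS = {"CRITICAL": 3.0, "HIGH": 2.0, "MEDIUM": 1.0, "LOW": 0.5}

def category_score(issues: list[str]) -> float:
    """Map a category's issue severities to an X/10 score, floored at 0."""
    deduction = sum(WEIGHTS[sev] for sev in issues)
    return max(0.0, round(10.0 - deduction, 1))
```

Under these assumed weights, a category with one CRITICAL and one MEDIUM issue scores 6.0/10.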

## Reference Files

- Comment rules and patterns: [references/comments_rules.md](references/comments_rules.md)

## Definition of Done

- All source files scanned (tech stack auto-detected)
- Inline comments and docstrings/JSDoc extracted and parsed
- All 6 categories audited with score X/10 each (WHY-not-WHAT, Density, Forbidden, Docstrings, Actuality, Legacy)
- Comment-to-code density ratio calculated and compared against 15-20% target
- Critical Findings listed with file:line, category, and fix suggestion
- Recommended Actions table generated with priority, action, location, category

## Critical Notes

- **Fix code, not rules:** NEVER modify rules files (*_rules.md, *_standards.md) to make violations pass. Always fix the code instead.
- **Code is truth:** When comment contradicts code, flag comment for update
- **WHY > WHAT:** Comments explaining obvious behavior should be removed
- **Task IDs OK:** Task/Story IDs in comments help with code traceability
- **Universal:** Works with any language; detect comment syntax automatically
- **Based on:** Claude Code comment-analyzer agent patterns

---
**Version:** 3.0.0
**Last Updated:** 2025-12-23

Overview

This skill audits the quality of code comments and docstrings across six focused categories and produces a compliance report with actionable fixes. It is designed for use after refactors, as part of documentation pipelines, or whenever comment-quality gates are required. The output includes X/10 scores per category, detailed findings with file:line references, and prioritized recommended actions.

How this skill works

The auditor scans a codebase, auto-detects languages and comment syntax, and extracts inline comments, docstrings, and JSDoc blocks. It runs six checks (WHY-not-WHAT, Density, Forbidden Content, Docstrings, Actuality, Legacy), computes a score per category using a unified scoring formula, and generates a structured report with critical findings and remediation steps.

When to use it

  • After major refactoring to ensure comments match updated code behavior
  • Before a release or merge to verify documentation quality and compliance
  • As a phase in an automated documents pipeline (e.g., Phase 5 of docs workflow)
  • During code-quality gates or code review automation
  • When onboarding a legacy codebase to identify stale or forbidden comments

Best practices

  • Run the auditor early in CI and after large merges to catch regressions quickly
  • Prioritize fixing CRITICAL and HIGH severity items (forbidden content, commented-out code, stale comments)
  • Keep comments focused on rationale (WHY) and remove obvious WHAT statements
  • Maintain docstrings that mirror function signatures and document params/returns
  • Treat the report as a task list: create tickets for Medium/High priorities and track fixes

Example use cases

  • Integrate into a CI pipeline to block merges when comment density or forbidden content fails thresholds
  • Audit a migrated codebase to remove author names, dates, and historical notes from comments
  • Run after automated refactors to catch outdated docstrings and stale examples
  • Generate a remediation plan for technical debt that includes comment cleanup and docstring fixes

FAQ

Which languages are supported?

Any language is supported: the auditor auto-detects comment and docstring syntax for common stacks.

How are scores calculated?

Scores use a unified scoring formula; severity mappings (CRITICAL/HIGH/MEDIUM/LOW) weight issues per category to produce an X/10 score.