
code-standards-detective skill

/plugins/specweave/skills/code-standards-detective

This skill uncovers real coding standards in a TypeScript codebase using evidence, statistics, and concrete examples to guide improvements.

This skill appears to be a fork of the sw-code-standards-detective skill from openclaw.
npx playbooks add skill anton-abyzov/specweave --skill code-standards-detective

Review the files below or copy the command above to add this skill to your agents.

Files (2): SKILL.md (3.0 KB)
---
name: code-standards-detective
description: Deep codebase analysis to discover actual coding standards through statistical evidence. Use when analyzing naming conventions, import patterns, or detecting anti-patterns in existing code. Provides evidence-based detection of how the codebase actually works (not aspirations).
allowed-tools: Read, Grep, Glob, Bash, Write
---

# Code Standards Detective Skill

## Overview

You analyze codebases to discover and document coding standards. You detect patterns, conventions, and anti-patterns with statistical evidence.

## Progressive Disclosure

Load phases as needed:

| Phase | When to Load | File |
|-------|--------------|------|
| Config Analysis | Parsing config files | `phases/01-config-analysis.md` |
| Pattern Detection | Finding code patterns | `phases/02-pattern-detection.md` |
| Report Generation | Creating standards doc | `phases/03-report.md` |

## Core Principles

1. **Evidence-based** - Statistics and confidence levels
2. **Real examples** - Code snippets from actual codebase
3. **Actionable** - Clear guidelines, not just observations

## Quick Reference

### Detection Categories

1. **Naming Conventions**
   - Variables: camelCase, PascalCase, UPPER_SNAKE
   - Functions: verb prefixes (get, set, is, has)
   - Files: kebab-case, PascalCase

2. **Import Patterns**
   - Absolute vs relative imports
   - Import ordering
   - Named vs default exports

3. **Function Characteristics**
   - Average length
   - Parameter counts
   - Return type patterns

4. **Type Usage**
   - any usage percentage
   - Interface vs type
   - Strictness level

5. **Error Handling**
   - try-catch patterns
   - Error types used
   - Logging patterns
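
The naming-convention category above boils down to classifying identifiers and reporting the dominant style with a frequency-based confidence score. A minimal sketch of that idea follows; the regexes, type names, and helper functions are illustrative assumptions, not the skill's actual implementation:

```typescript
// Illustrative sketch: classify identifiers into naming styles and
// report the dominant style with frequency-based confidence.
type Style = "camelCase" | "PascalCase" | "UPPER_SNAKE" | "other";

function classify(name: string): Style {
  if (/^[A-Z][A-Z0-9_]*$/.test(name)) return "UPPER_SNAKE";
  if (/^[A-Z][a-zA-Z0-9]*$/.test(name)) return "PascalCase";
  if (/^[a-z][a-zA-Z0-9]*$/.test(name)) return "camelCase";
  return "other";
}

function dominantStyle(names: string[]): { style: Style; confidence: number } {
  // Tally each style, then pick the most frequent one.
  const counts = new Map<Style, number>();
  for (const n of names) {
    const s = classify(n);
    counts.set(s, (counts.get(s) ?? 0) + 1);
  }
  let best: Style = "other";
  let bestCount = 0;
  for (const [s, c] of counts) {
    if (c > bestCount) {
      best = s;
      bestCount = c;
    }
  }
  // Confidence is observed frequency: matches / total samples.
  return { style: best, confidence: bestCount / names.length };
}
```

The confidence value here is just observed frequency, which matches the "94% (842/896 samples)" style of reporting used in the output format below.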

### Config Files to Parse

```
.eslintrc.js / .eslintrc.json
.prettierrc / .prettierrc.json
tsconfig.json
.editorconfig
```
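
For tsconfig.json in particular, detecting whether absolute imports are configured comes down to checking `baseUrl` and `paths` under `compilerOptions`. A hedged sketch, assuming a pre-parsed config object (the field names follow the real tsconfig schema; the helper names are made up for illustration):

```typescript
// Illustrative sketch: infer absolute-import configuration from a
// parsed tsconfig.json object.
interface TsConfig {
  compilerOptions?: {
    baseUrl?: string;
    paths?: Record<string, string[]>;
  };
}

// Absolute imports are in play if either baseUrl or path aliases are set.
function absoluteImportsEnabled(config: TsConfig): boolean {
  const opts = config.compilerOptions ?? {};
  return Boolean(opts.baseUrl) || Object.keys(opts.paths ?? {}).length > 0;
}

// Extract alias prefixes such as "@" from path mappings like "@/*".
function aliasPrefixes(config: TsConfig): string[] {
  return Object.keys(config.compilerOptions?.paths ?? {}).map((p) =>
    p.replace(/\/\*$/, "")
  );
}
```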

### Output Format

````markdown
# Coding Standards: [Project Name]

## Naming Conventions

### Variables
**Pattern**: camelCase
**Confidence**: 94% (842/896 samples)
**Example**:
```typescript
const userName = 'John';
const isActive = true;
```

### Functions
**Pattern**: verb + noun (getUser, setConfig)
**Confidence**: 87% (234/269 samples)

## Import Patterns
**Absolute imports**: Enabled (paths in tsconfig)
**Import order**: external → internal → relative
**Example**:
```typescript
import { z } from 'zod';           // external
import { logger } from '@/lib';    // internal
import { helper } from './helper'; // relative
```

## Anti-Patterns Detected
- ⚠️ `any` usage: 12 instances (recommend: 0)
- ⚠️ console.log: 8 instances (use logger)
````

## Workflow

1. **Parse configs** (< 500 tokens): ESLint, Prettier, TypeScript
2. **Detect patterns** (< 600 tokens per category): With stats
3. **Generate report** (< 600 tokens): Standards document

## Token Budget

**NEVER exceed 2000 tokens per response!**

## Detection Commands

```bash
# Count naming patterns
grep -rE "const [a-z][a-zA-Z]+ =" src/ | wc -l

# Find function patterns
grep -rE "function (get|set|is|has)" src/ | head -20

# Check for any usage
grep -rE ": any" src/ | wc -l
```
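
When the report needs to cite locations rather than just totals, the grep heuristics above can be mirrored in code. A minimal sketch, assuming lines have already been read from a source file (the pattern list and type names are illustrative, not the skill's code):

```typescript
// Illustrative sketch: per-line anti-pattern scan so the report can
// cite a line number for each finding. Regexes mirror the greps above.
interface Finding {
  line: number;
  pattern: string;
  text: string;
}

const ANTI_PATTERNS: Array<[string, RegExp]> = [
  ["any-type", /:\s*any\b/],
  ["console-log", /\bconsole\.log\(/],
];

function scanLines(lines: string[]): Finding[] {
  const findings: Finding[] = [];
  lines.forEach((text, i) => {
    for (const [pattern, re] of ANTI_PATTERNS) {
      // Record 1-based line numbers, matching editor conventions.
      if (re.test(text)) findings.push({ line: i + 1, pattern, text });
    }
  });
  return findings;
}
```

Like the grep commands, these regexes are heuristics: `:\s*any` will also match the word inside strings or comments, so flagged lines should be spot-checked.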

Overview

This skill performs deep analysis of a TypeScript codebase to discover the coding standards that are actually in use. It produces evidence-backed findings with confidence metrics, code examples, and actionable recommendations. Use it to turn implicit patterns into a reproducible standards document.

How this skill works

The skill parses configuration files (ESLint, Prettier, tsconfig, .editorconfig) and scans source files to extract statistical signals. It classifies naming conventions, import patterns, type and error-handling usage, and common anti-patterns, reporting frequencies and confidence scores with representative code snippets. Finally it formats a concise standards report that highlights both dominant conventions and recommended fixes.

When to use it

  • Onboard new contributors so they follow the project’s real conventions
  • Before standardizing or enforcing linters to avoid breaking existing patterns
  • Audit a legacy TypeScript codebase to identify risky anti-patterns (e.g., any usage, console.log)
  • Generate a living coding-standards document for documentation or developer portals
  • Assess how CI/lint config aligns with actual source code behavior

Best practices

  • Parse config files first to get intended rules, then validate against code statistics
  • Report both majority patterns and notable minorities (with examples) to avoid false assumptions
  • Include confidence percentages and raw sample counts for every detected rule
  • Provide minimal, actionable remediation steps for each anti-pattern
  • Limit per-category output size to keep reports readable and machine-parsable

Example use cases

  • Detect whether variables use camelCase or snake_case across the repo and produce sample counts
  • Identify whether absolute imports are used and if tsconfig paths are configured accordingly
  • Measure function complexity by average length and parameter counts to guide refactoring targets
  • Locate and quantify use of any, console.log, and other risky constructs and propose remediation
  • Generate a standards.md that can be added to the project docs with snippets and confidence metrics
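
The last use case, emitting a standards.md, is largely a formatting step once detection counts exist. A sketch of rendering one detected rule in the report format shown earlier; the `DetectedRule` shape and function name are assumptions for illustration:

```typescript
// Illustrative sketch: render one detected rule as a markdown section
// in the "Pattern / Confidence" format used by this skill's reports.
interface DetectedRule {
  title: string;
  pattern: string;
  matches: number;
  samples: number;
}

function renderRule(rule: DetectedRule): string {
  // Round observed frequency to a whole-percent confidence figure,
  // but keep the raw counts so readers can judge the sample size.
  const pct = Math.round((rule.matches / rule.samples) * 100);
  return [
    `### ${rule.title}`,
    `**Pattern**: ${rule.pattern}`,
    `**Confidence**: ${pct}% (${rule.matches}/${rule.samples} samples)`,
  ].join("\n");
}
```

Reporting raw counts alongside the percentage follows the best practice above: a 94% figure from 896 samples carries more weight than the same percentage from 17.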

FAQ

How precise are the confidence numbers?

Confidence numbers are statistical counts from scanned samples: they reflect observed frequency, not absolute correctness. Always inspect examples for edge cases.

Which files does the skill examine first?

It prioritizes ESLint, Prettier, tsconfig.json, and .editorconfig to infer intended rules before scanning source files for actual patterns.