
code-reviewer skill

/skills/code-reviewer

This skill analyzes code quality, detects smells, and suggests focused improvements for reviews and pre-merge checks.

npx playbooks add skill physics91/claude-vibe --skill code-reviewer

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (2.5 KB)
---
name: code-reviewer
description: |
  WHEN: Code review, quality check, code smell detection, refactoring suggestions
  WHAT: Complexity analysis + code smell list + severity-based issues + improvement suggestions
  WHEN NOT: Next.js specific → nextjs-reviewer, Security → security-scanner, Performance → perf-analyzer
---

# Code Reviewer Skill

## Purpose
Analyzes code quality, detects code smells, and suggests improvements.

## When to Use
- Code review requests
- Mentions of code quality or code smells
- Post-implementation review
- Pre-merge PR review

## Workflow

### Step 1: Review Scope
**AskUserQuestion:**
```
"What code should I review?"
Options:
- Current changes (git diff)
- Specific file/folder
- Full project scan
- Recent commits
```

### Step 2: Review Focus
**AskUserQuestion:**
```
"What should I focus on?"
Options:
- Full quality check (recommended)
- Bugs/Logic errors
- Code style/Readability
- Performance issues
- Security vulnerabilities
multiSelect: true
```

### Step 3: Analysis
- **Complexity**: Cyclomatic, Cognitive
- **Duplication**: DRY violations
- **Naming**: Variable/function naming quality
- **Structure**: Function length, nesting depth, parameter count
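The complexity metric in Step 3 can be sketched with Python's standard `ast` module. This is an illustrative approximation, not the skill's actual analyzer: cyclomatic complexity starts at 1 and increments for each branching construct found in the parsed source.

```python
import ast

# Branching constructs that add a decision point (illustrative subset).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity of a Python snippet."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for i in range(n):
        if i % 2:
            pass
    return "done"
"""
```

Here `sample` has three branch nodes (two `if` statements and one `for` loop), giving a complexity of 4. Cognitive complexity additionally weights nesting depth, which a fuller implementation would track during traversal.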

## Detection Rules

### Code Smells
| Smell | Threshold | Severity |
|-------|-----------|----------|
| Long Function | > 50 lines | MEDIUM |
| Deep Nesting | > 3 levels | HIGH |
| Magic Numbers | Hardcoded numbers | LOW |
| Long Parameter List | > 4 params | MEDIUM |
| God Object | > 20 methods | HIGH |
| Duplicate Code | > 10 lines | HIGH |
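The thresholds in the table above lend themselves to simple AST checks. The sketch below is a minimal, assumed implementation for a single Python function (the constant names and single-function scope are illustrative, not the skill's real API); it covers three of the six smells.

```python
import ast

# Thresholds from the smells table above.
MAX_FUNC_LINES = 50
MAX_NESTING = 3
MAX_PARAMS = 4

def max_nesting(node: ast.AST, depth: int = 0) -> int:
    """Deepest level of nested control-flow blocks under `node`."""
    nested = (ast.If, ast.For, ast.While, ast.With, ast.Try)
    best = depth
    for child in ast.iter_child_nodes(node):
        extra = 1 if isinstance(child, nested) else 0
        best = max(best, max_nesting(child, depth + extra))
    return best

def detect_smells(source: str) -> list[tuple[str, str]]:
    """Return (smell, severity) pairs for the first function in `source`."""
    func = ast.parse(source).body[0]
    smells = []
    length = (func.end_lineno or func.lineno) - func.lineno + 1
    if length > MAX_FUNC_LINES:
        smells.append(("Long Function", "MEDIUM"))
    if len(func.args.args) > MAX_PARAMS:
        smells.append(("Long Parameter List", "MEDIUM"))
    if max_nesting(func) > MAX_NESTING:
        smells.append(("Deep Nesting", "HIGH"))
    return smells
```

For example, a five-parameter function with four levels of nested `if` blocks would be flagged for both Long Parameter List (MEDIUM) and Deep Nesting (HIGH).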

### Naming Conventions
| Type | Pattern |
|------|---------|
| Function | camelCase, verb prefix |
| Variable | camelCase, noun |
| Constant | UPPER_SNAKE_CASE |
| Class | PascalCase |
| File | kebab-case or PascalCase |
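The naming table above maps naturally to regular expressions. The patterns below are an illustrative sketch (variables share the function pattern; real-world rules would also need to handle acronyms, dunder names, and private prefixes):

```python
import re

# Illustrative patterns for the naming-convention table above.
NAMING_PATTERNS = {
    "function": re.compile(r"[a-z][a-zA-Z0-9]*"),                 # camelCase
    "constant": re.compile(r"[A-Z][A-Z0-9_]*"),                   # UPPER_SNAKE_CASE
    "class":    re.compile(r"[A-Z][a-zA-Z0-9]*"),                 # PascalCase
    "file":     re.compile(r"[a-z0-9]+(-[a-z0-9]+)*|[A-Z][a-zA-Z0-9]*"),  # kebab or Pascal
}

def check_name(kind: str, name: str) -> bool:
    """True if `name` fully matches the convention for `kind`."""
    return NAMING_PATTERNS[kind].fullmatch(name) is not None
```

For instance, `check_name("function", "getUser")` passes, while `check_name("class", "code_reviewer")` fails because class names must be PascalCase.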

## Response Template
```
## Code Review Results

**Target**: [path]

### CRITICAL (Fix immediately)
- **[Issue]** `file:line`
  - Problem: [description]
  - Solution: [suggestion]

### HIGH / MEDIUM / LOW (same format)
- ...

### Positive Patterns
- [Highlight well-written code]

### Summary
- Total issues: X
- Critical: X | High: X | Medium: X | Low: X
```
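Filling the template from a flat list of findings is a simple group-and-sort step. The sketch below assumes a dict-based finding shape (`severity`, `issue`, `location`, `problem`, `solution`), which is an assumption for illustration rather than the skill's real data model:

```python
from collections import defaultdict

SEVERITY_ORDER = ["CRITICAL", "HIGH", "MEDIUM", "LOW"]

def render_report(target: str, findings: list[dict]) -> str:
    """Render findings into the severity-grouped report template."""
    groups = defaultdict(list)
    for f in findings:
        groups[f["severity"]].append(f)
    lines = ["## Code Review Results", "", f"**Target**: {target}", ""]
    for sev in SEVERITY_ORDER:
        if not groups[sev]:
            continue  # skip empty severity sections
        lines.append(f"### {sev}")
        for f in groups[sev]:
            lines.append(f"- **{f['issue']}** `{f['location']}`")
            lines.append(f"  - Problem: {f['problem']}")
            lines.append(f"  - Solution: {f['solution']}")
        lines.append("")
    total = sum(len(groups[s]) for s in SEVERITY_ORDER)
    lines.append(f"### Summary\n- Total issues: {total}")
    return "\n".join(lines)
```

Iterating over `SEVERITY_ORDER` rather than the dict keys guarantees the Critical-first ordering the template calls for.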

## Best Practices
1. Provide specific, actionable feedback with solutions
2. Group by severity: Critical > High > Medium > Low
3. Include positive feedback
4. Provide copy-paste ready code fixes
5. Respect project conventions

## Integration
- `/analyze-code` command
- `security-scanner` skill
- `perf-analyzer` skill

## Notes
- Reviews are suggestions; final decisions rest with the developer
- Maintain consistency with existing project conventions
- Use alongside automated linters (ESLint, Prettier)

Overview

This skill performs automated code reviews to identify code smells, measure complexity, and recommend targeted refactors. It produces severity-ranked findings and practical remediation suggestions to improve readability, maintainability, and structure. Use it as a developer-facing reviewer that complements linters and CI checks.

How this skill works

You specify the review scope (diff, file, folder, or full project) and the focus areas (quality, bugs, style, performance, security). The analyzer computes metrics such as cyclomatic and cognitive complexity, detects duplication and naming issues, and flags smells against configured thresholds. Results are grouped by severity with concrete fixes and copy-paste snippets where appropriate.

When to use it

  • Preparing a pull request for pre-merge review
  • Post-implementation quality check across new code
  • Detecting code smells before a refactor sprint
  • Validating maintainability of a module or function
  • Assessing recent commits for regressions in code quality

Best practices

  • Run on the diff for faster, targeted feedback; run full scans periodically
  • Select multiple focus areas (quality, readability, bugs) for comprehensive results
  • Treat findings as actionable suggestions; prioritize by severity
  • Provide minimal, copy-paste fixes for medium/low issues and detailed steps for critical items
  • Combine with existing linters and project conventions to avoid noisy reports
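Scoping a review to the diff, as the first practice suggests, typically means filtering the output of `git diff --name-only` down to reviewable sources. A minimal sketch (the extension list is an illustrative assumption; capturing the git output itself happens elsewhere):

```python
# Illustrative filter: keep only source files worth reviewing.
REVIEWABLE = (".py", ".js", ".ts")

def changed_files(diff_name_only: str) -> list[str]:
    """Filter `git diff --name-only` output to reviewable source files."""
    return [line for line in diff_name_only.splitlines()
            if line.strip() and line.endswith(REVIEWABLE)]
```

This keeps diff-scoped runs fast and avoids flagging docs or lockfiles that linters and reviewers should ignore.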

Example use cases

  • Scan a feature branch diff to catch deep nesting and long functions before merge
  • Run a full project scan to find duplicated logic and god objects prior to refactoring
  • Review a critical module for naming, parameter count, and complexity before release
  • Validate that new code follows naming and file-structure conventions
  • Produce a severity-sorted report for a code review meeting or ticket backlog

FAQ

Can I limit the review to changed files only?

Yes. Choose the "Current changes (git diff)" option to limit the review to modified files.

How are severities determined?

Findings use predefined thresholds (e.g., deep nesting >3 levels is HIGH). Rules map to Critical, High, Medium, or Low based on impact and configured limits.