
code-review skill

/skills/code-review

This skill performs structured code analysis with evidence collection to identify issues and generate actionable review reports.

npx playbooks add skill simhacker/moollm --skill code-review


Files (7)

SKILL.md (3.6 KB)
---
name: code-review
description: Systematic code analysis with evidence collection
allowed-tools:
  - read_file
  - run_terminal_cmd
tier: 2
protocol: CODE-REVIEW
tags: [moollm, development, quality, review, security]
related: [adventure, debugging, research-notebook, session-log, rubric, evaluator]
templates:
  - file: REVIEW.yml.tmpl
    purpose: Structured review tracking
    parameters: [review_name, created_date, focus, files]
  - file: REVIEW.md.tmpl
    purpose: Formatted review document
    parameters: [review_name, created_date, files]
---

# Code Review

> *"Read with intent. Question with purpose. Document with care."*

Systematic code analysis with evidence collection. Code review IS an [adventure](../adventure/) — the codebase is the dungeon, findings are clues.

## Review Process

```
READ → NOTE ISSUES → CLASSIFY → VERIFY → REPORT
```

### Step 1: Setup
1. Create REVIEW.yml
2. Identify files to review
3. Define focus areas

### Step 2: Overview
1. List all changed files
2. Read PR/commit description
3. Note initial impressions

### Step 3: Deep Review
For each file:
1. Read the code
2. Check against criteria
3. Note findings
4. Run relevant checks

### Step 4: Verification
1. Run tests
2. Run linters
3. Check for regressions

### Step 5: Synthesize
1. Compile findings
2. Prioritize issues
3. Generate REVIEW.md
4. State recommendation
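The steps above can be sketched as a minimal driver loop. This is an illustrative Python sketch, not part of the skill itself: the helper names, the toy classification rule, and the dict shapes are all assumptions.

```python
# Minimal sketch of READ -> NOTE ISSUES -> CLASSIFY -> VERIFY -> REPORT.
# The classification rule here is a toy stand-in; real reviews use the
# checklists below.

SEVERITIES = ("blocking", "important", "minor", "praise")

def classify(note):
    """Toy rule (assumption): security notes are blocking, the rest minor."""
    return "blocking" if note["type"] == "security" else "minor"

def review(files):
    """Walk each file's notes and bucket them by severity (Step 3 + Step 5)."""
    findings = {level: [] for level in SEVERITIES}
    for path, notes in files.items():
        for note in notes:
            findings[classify(note)].append({**note, "file": path})
    return findings

notes = {"src/auth/login.ts": [{"type": "security", "summary": "timing attack"}]}
print(review(notes)["blocking"][0]["summary"])  # -> timing attack
```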

## Finding Severity

| Level | Symbol | Meaning | Action |
|-------|--------|---------|--------|
| Blocking | 🚫 | Must fix before merge | Request changes |
| Important | ⚠️ | Should fix or explain | Request changes |
| Minor | 💡 | Nice to fix | Comment only |
| Praise | 🎉 | Good work! | Celebrate |

## Finding Types

- **Security** — Injection, auth, sensitive data
- **Correctness** — Logic errors, edge cases
- **Performance** — N+1 queries, memory leaks
- **Maintainability** — Clarity, DRY, naming
- **Style** — Formatting, conventions

## Review Checklist

### Security
- Input validation
- Output encoding
- Authentication/authorization
- Sensitive data handling
- Injection vulnerabilities
- Timing attacks
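Timing attacks are the easiest item on this list to miss: a naive equality check on secrets short-circuits at the first mismatched byte, so response time leaks how much of the prefix matched. A small Python illustration using the standard library's constant-time comparison (the function names are hypothetical):

```python
import hmac

def check_token_naive(supplied: bytes, expected: bytes) -> bool:
    # == returns as soon as bytes differ, so timing reveals the matching
    # prefix length -- flag this pattern in review.
    return supplied == expected

def check_token_safe(supplied: bytes, expected: bytes) -> bool:
    # hmac.compare_digest takes the same time regardless of where the
    # first difference occurs.
    return hmac.compare_digest(supplied, expected)

print(check_token_safe(b"s3cret", b"s3cret"))  # -> True
```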

### Correctness
- Logic errors
- Edge cases handled
- Null/undefined handling
- Error handling
- Race conditions
- Resource cleanup

### Maintainability
- Code clarity
- Appropriate comments
- Consistent naming
- DRY (no duplication)
- Single responsibility
- Testability

### Performance
- Algorithmic complexity
- Memory usage
- Database queries
- Caching
- Unnecessary operations
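The N+1 query pattern called out under Finding Types deserves a concrete sketch: one query to list parents, then one additional query per parent. A self-contained Python example with an in-memory `sqlite3` database and a made-up schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'ada'), (2, 'lin');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'third');
""")

# N+1: one query for users, then one query PER user -- flag in review.
users = conn.execute("SELECT id, name FROM users").fetchall()
for uid, _name in users:
    conn.execute("SELECT title FROM posts WHERE user_id = ?", (uid,)).fetchall()

# Fix: a single JOIN fetches everything in one round trip.
rows = conn.execute("""
    SELECT users.name, posts.title FROM users
    JOIN posts ON posts.user_id = users.id
""").fetchall()
print(len(rows))  # -> 3
```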

## Core Files

### REVIEW.yml

```yaml
review:
  name: "PR #123: Add user authentication"
  status: "in_progress"
  
findings:
  blocking:
    - id: "B1"
      file: "src/auth/login.ts"
      line: 45
      type: "security"
      summary: "Timing attack vulnerability"
      
  important: []
  minor: []
  praise: []

verification:
  tests: { ran: true, passed: true }
  linter: { ran: true, passed: false, issues: 3 }
```

### REVIEW.md

Formatted document with:
- Summary and counts
- Issues by severity
- Verification results
- Recommendation

## Verification Commands

```yaml
tests:
  - "npm test"
  - "pytest"
  - "go test ./..."
  
linters:
  - "npm run lint"
  - "flake8"
  - "golangci-lint run"
```
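A stack's commands can be driven by a small runner that records pass/fail per command. This sketch uses `subprocess` with portable stand-in commands (`true`/`false`) rather than a real test suite:

```python
import subprocess

def run_checks(commands):
    """Run each verification command, recording whether it exited cleanly."""
    results = {}
    for cmd in commands:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        results[cmd] = proc.returncode == 0
    return results

# Stand-ins for "npm test", "npm run lint", etc.:
print(run_checks(["true", "false"]))  # -> {'true': True, 'false': False}
```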

## Recommendation Output

| Outcome | Meaning |
|---------|---------|
| `approve` | Good to merge |
| `request_changes` | Has blocking/important issues |
| `comment` | Minor feedback only |
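The table above maps mechanically from the severity buckets to an outcome. A minimal sketch of that rule (the skill does not mandate this exact function):

```python
def recommend(findings):
    """Derive the review outcome from severity buckets, per the table above."""
    if findings.get("blocking") or findings.get("important"):
        return "request_changes"
    if findings.get("minor"):
        return "comment"
    return "approve"

print(recommend({"blocking": ["B1"]}))  # -> request_changes
print(recommend({"minor": ["M1"]}))     # -> comment
print(recommend({}))                    # -> approve
```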

## See Also

- [rubric](../rubric/) — **Explicit scoring criteria** for code quality
- [evaluator](../evaluator/) — Independent assessment pattern
- [adversarial-committee](../adversarial-committee/) — Multiple reviewers debating findings

## Overview

This skill performs systematic code analysis with structured evidence collection and clear recommendations. It guides reviewers through setup, deep inspection, verification, and synthesis to produce a reproducible REVIEW.md and machine-readable REVIEW.yml. The process emphasizes severity-tagged findings and actionable outcomes to speed decision-making.

## How this skill works

The skill inspects changed files, PR or commit descriptions, and predefined focus areas, then reads each file against a checklist covering security, correctness, performance, maintainability, and style. Findings are recorded with severity (blocking, important, minor, praise), classified by type, and verified by running tests and linters. Results are synthesized into a prioritized report and a recommendation: approve, request_changes, or comment.

## When to use it

- When reviewing pull requests or commits that introduce new features or critical fixes
- Before merging high-risk changes that touch authentication, authorization, or sensitive data
- During release-candidate validation to catch regressions and performance issues
- When onboarding new team reviewers to enforce consistent review quality
- For periodic audits of legacy code to identify maintainability debt

## Best practices

- Create a REVIEW.yml at the start to record scope, files, and status
- Limit each review to defined focus areas to avoid scope creep
- Record findings with evidence: file, line, type, and a short reproducible note
- Prioritize blocking issues first and group similar findings to reduce noise
- Run tests and linters early; verify fixes before closing the review

## Example use cases

- Security review for a new authentication flow to detect injection or timing attacks
- Performance check for database-heavy changes to identify N+1 queries
- Correctness review for edge-case handling in business logic
- Maintainability pass to consolidate duplicated code and improve naming
- Pre-merge verification that runs unit tests and linters and documents failing checks

## FAQ

### How are severities assigned?

Severities map directly to action: blocking (must fix before merge), important (should fix or explain), minor (nice to fix), and praise (positive notes).

### What output formats are produced?

The workflow produces a human-readable REVIEW.md summarizing findings and a machine-readable REVIEW.yml capturing structured data and verification results.

### Which verification steps are required?

Recommended verification includes running the project's test suite and linters; the exact commands vary by stack (`npm test`, `pytest`, `go test`, `flake8`, etc.).