---
name: code-review
description: Review code quality, receive feedback with technical rigor, verify completion claims. Use before PRs, after implementing features, when claiming task completion, for subagent reviews.
---
# Code Review
Guide proper code review practices emphasizing technical rigor, evidence-based claims, and verification over performative responses.
## Overview
Code review requires three distinct practices:
1. **Receiving feedback** - Technical evaluation over performative agreement
2. **Requesting reviews** - Systematic review via code-reviewer subagent
3. **Verification gates** - Evidence before any completion claims
Each practice has specific triggers and protocols detailed in reference files.
## Core Principle
Always honor **YAGNI**, **KISS**, and **DRY**.
**Be honest, be brutal, be straight to the point, and be concise.**
**Technical correctness over social comfort.** Verify before implementing. Ask before assuming. Evidence before claims.
## When to Use This Skill
### Receiving Feedback
Trigger when:
- Receiving code review comments from any source
- Feedback seems unclear or technically questionable
- Multiple review items need prioritization
- External reviewer lacks full context
- Suggestion conflicts with existing decisions
**Reference:** `references/code-review-reception.md`
### Requesting Review
Trigger when:
- Completing tasks in subagent-driven development (after EACH task)
- Finishing major features or refactors
- Before merging to main branch
- Stuck and need fresh perspective
- After fixing complex bugs
**Reference:** `references/requesting-code-review.md`
### Verification Gates
Trigger when:
- About to claim tests pass, build succeeds, or work is complete
- Before committing, pushing, or creating PRs
- Moving to next task
- Any statement suggesting success/completion
- Expressing satisfaction with work
**Reference:** `references/verification-before-completion.md`
## Quick Decision Tree
```
SITUATION?
│
├─ Received feedback
│ ├─ Unclear items? → STOP, ask for clarification first
│ ├─ From human partner? → Understand, then implement
│ └─ From external reviewer? → Verify technically before implementing
│
├─ Completed work
│ ├─ Major feature/task? → Request code-reviewer subagent review
│ └─ Before merge? → Request code-reviewer subagent review
│
└─ About to claim status
├─ Have fresh verification? → State claim WITH evidence
└─ No fresh verification? → RUN verification command first
```
## Receiving Feedback Protocol
### Response Pattern
READ → UNDERSTAND → VERIFY → EVALUATE → RESPOND → IMPLEMENT
### Key Rules
- ❌ No performative agreement: "You're absolutely right!", "Great point!", "Thanks for [anything]"
- ❌ No implementation before verification
- ✅ Restate requirement, ask questions, push back with technical reasoning, or just start working
- ✅ If unclear: STOP and ask for clarification on ALL unclear items first
- ✅ YAGNI check: grep for usage before implementing suggested "proper" features
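A minimal sketch of that YAGNI check, assuming a typical `src/` and `tests/` layout; `retry_handler` is a hypothetical symbol standing in for whatever the review comment suggests building:
```bash
# YAGNI check: confirm the suggested abstraction actually has callers
# before building it. Substitute the real symbol from the review comment.
grep -rn "retry_handler" src/ tests/

# No matches? The "proper" feature has no usage yet — push back with
# that evidence instead of implementing speculatively.
```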
### Source Handling
- **Human partner:** Trusted - implement after understanding, no performative agreement
- **External reviewers:** Verify technically correct, check for breakage, push back if wrong
**Full protocol:** `references/code-review-reception.md`
## Requesting Review Protocol
### When to Request
- After each task in subagent-driven development
- After major feature completion
- Before merge to main
### Process
1. Get git SHAs: `BASE_SHA=$(git rev-parse HEAD~1)` and `HEAD_SHA=$(git rev-parse HEAD)`
2. Dispatch code-reviewer subagent via Task tool with: WHAT_WAS_IMPLEMENTED, PLAN_OR_REQUIREMENTS, BASE_SHA, HEAD_SHA, DESCRIPTION
3. Act on feedback: Fix Critical immediately, Important before proceeding, note Minor for later
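A hedged sketch of steps 1–2; the SHA capture comes straight from the process above, while the `git diff` sanity check is illustrative (the actual dispatch happens through the Task tool, not the shell):
```bash
# Step 1: capture the commit range for the change under review.
BASE_SHA=$(git rev-parse HEAD~1)
HEAD_SHA=$(git rev-parse HEAD)

# Sanity-check the range before dispatching: the reviewer should see
# exactly this diff, no more and no less.
git diff --stat "$BASE_SHA".."$HEAD_SHA"

# Step 2: dispatch the code-reviewer subagent via the Task tool with:
#   WHAT_WAS_IMPLEMENTED, PLAN_OR_REQUIREMENTS, BASE_SHA, HEAD_SHA, DESCRIPTION
```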
**Full protocol:** `references/requesting-code-review.md`
## Verification Gates Protocol
### The Iron Law
**NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE**
### Gate Function
IDENTIFY command → RUN full command → READ output → VERIFY confirms claim → THEN claim
Skip any step = lying, not verifying
### Requirements
- Tests pass: Test output shows 0 failures
- Build succeeds: Build command exit 0
- Bug fixed: Test original symptom passes
- Requirements met: Line-by-line checklist verified
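A minimal gate sketch, assuming a project with a `pytest` suite and a `make build` target (swap in your project's real commands); the point is that the claim is only stated after the exit codes and output have been read:
```bash
# Run the FULL test suite and keep the raw output as evidence.
pytest 2>&1 | tee /tmp/test-output.log
TEST_STATUS=${PIPESTATUS[0]}

# Run the build and capture its exit code.
make build
BUILD_STATUS=$?

# Only claim success if both commands actually succeeded.
if [ "$TEST_STATUS" -eq 0 ] && [ "$BUILD_STATUS" -eq 0 ]; then
    echo "Verified: tests show 0 failures and build exits 0 — claim is backed by evidence."
else
    echo "NOT verified — do not claim completion. Read the output above." >&2
fi
```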
### Red Flags - STOP
- Using "should", "probably", or "seems to"
- Expressing satisfaction before verification
- Committing without verification
- Trusting agent reports without checking
- ANY wording implying success without running verification
**Full protocol:** `references/verification-before-completion.md`
## Integration with Workflows
- **Subagent-Driven:** Review after EACH task, verify before moving to next
- **Pull Requests:** Verify tests pass, request code-reviewer review before merge
- **General:** Apply verification gates before any status claims, push back on invalid feedback
## Bottom Line
1. Technical rigor over social performance - No performative agreement
2. Systematic review processes - Use code-reviewer subagent
3. Evidence before claims - Verification gates always
Verify. Question. Then implement. Evidence. Then claim.