
code-review skill

/.ai/src/skills/code-review

This skill guides authors and reviewers through a structured code-review workflow to improve correctness, maintainability, and security.

npx playbooks add skill yelmuratoff/agent_sync --skill code-review


SKILL.md
---
name: code-review
description: Use when preparing a PR for review or performing a structured review focused on correctness, risk, and maintainability.
---

# Code Review (Author + Reviewer Workflow)

## When to use

- Opening a pull request.
- Reviewing a teammate's or AI-generated changes.
- Validating merge readiness for medium/large changes.

## Steps

### 1) Prepare review-friendly PRs (author)

- Keep changes focused on a single objective.
- Separate unrelated refactors from behavioral changes.
- Include concise context: what changed, why, and how it was verified.
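A PR description can follow a short template like the sketch below; the section headings and example text are illustrative, not a required format:

```shell
# Hypothetical PR description template; adapt the headings to your team's conventions.
pr_template() {
cat <<'EOF'
## What changed
Extracted retry logic into a shared helper.

## Why
Three call sites duplicated the same backoff loop.

## How it was verified
Unit tests for the helper; existing integration tests pass locally.
EOF
}

pr_template
```

Filling the same three sections every time keeps intent, behavior, and verification visible at a glance for the reviewer.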

### 2) Run automation first

- Ensure local format/analyze/test checks pass before requesting review.
- Treat failing automation as a blocker, not reviewer work.
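A minimal pre-review gate might look like the sketch below. The placeholder `true` commands stand in for your project's real formatter, linter, and test runner (assumptions, not a prescribed toolchain); `set -e` aborts on the first failure so a red check never reaches a reviewer:

```shell
#!/usr/bin/env sh
# Hypothetical pre-review gate: replace each `true` with your project's
# real command (formatter in check mode, static analysis, test suite).
set -e

run() {
  echo "running: $*"
  "$@"
}

run true  # e.g. formatter in check mode
run true  # e.g. static analysis / lint
run true  # e.g. test suite

echo "all checks passed"
```

Wiring the same script into CI keeps "it passed locally" and "it passed in CI" from drifting apart.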

### 3) Review by risk, not by file order (reviewer)

- Start with architecture boundaries and dependency direction.
- Check correctness, error handling, and state transitions.
- Verify security/privacy risks (secrets, PII logging, untrusted input paths).
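For the security pass, a naive keyword scan over the diff can surface obvious leaks before the human look; the patterns below are deliberately incomplete examples, not a substitute for a real secret scanner:

```shell
# Naive secret scan over diff text read from stdin.
# Patterns are illustrative only; tune them for your codebase.
scan_diff() {
  grep -nEi 'password|secret|api[_-]?key|-----BEGIN .*PRIVATE KEY' \
    || echo "no obvious secret patterns"
}

# In a real review, pipe the actual diff in:
#   git diff origin/main...HEAD | scan_diff
printf '+ const apiKey = "sk-live-123"\n' | scan_diff
```

A hit is a prompt for human judgment (is this a test fixture or a live credential?), not an automatic block.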

### 4) Validate tests and observability

- Confirm changed behavior is covered by meaningful tests.
- Ensure error paths are tested for critical flows.
- Check logging/analytics changes are safe and intentional.
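One cheap check for coverage of changed behavior is to flag changed source files that lack a sibling test file in the same change set. The `.js`/`.test.js` naming convention here is an assumption; adapt it to your stack:

```shell
# Flag changed source files with no matching test file in the change set.
# In a real repo, build the list with: git diff --name-only origin/main...HEAD
missing_tests() {
  changed="$1"
  for f in $changed; do
    case "$f" in
      *.test.js) ;;                        # skip the test files themselves
      *.js)
        t="${f%.js}.test.js"
        echo "$changed" | grep -qF "$t" || echo "no test found for $f"
        ;;
    esac
  done
}

missing_tests "src/payment.js src/payment.test.js src/logger.js"
# prints: no test found for src/logger.js
```

Absence of a warning is weak evidence at best; it shows a test file changed, not that the tests are meaningful, so the reviewer still reads them.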

### 5) Give actionable feedback

- Describe the concrete issue, impact, and expected fix direction.
- Distinguish blocking issues from optional improvements.
- Prefer precise suggestions over broad or stylistic comments.

### 6) Close review cleanly

- Resolve all blocking comments before merge.
- Re-run relevant checks after follow-up commits.
- Merge only when intent, behavior, and verification are all clear.

Overview

This skill helps authors and reviewers prepare and perform structured code reviews focused on correctness, risk, and maintainability. It codifies a workflow for creating review-friendly PRs, running automation first, reviewing by risk, validating tests and observability, and giving actionable feedback. Use it to reduce review time and increase merge confidence.

How this skill works

Authors keep changes small, document intent, and run local checks before requesting review. Reviewers assess changes by risk and architecture boundaries, verify correctness and error handling, and confirm tests and observability cover critical paths. The workflow enforces resolving blocking issues and re-running checks before merge.

When to use it

  • Opening a pull request for a feature, bugfix, or refactor
  • Reviewing teammate or AI-generated code changes
  • Validating merge readiness for medium or large changes
  • Ensuring safety for security, privacy, or critical flows
  • Before merging changes that affect architecture or dependencies

Best practices

  • Keep PRs focused on a single objective and separate unrelated refactors
  • Include concise context: what changed, why, and how it was verified
  • Run formatting, static analysis, and tests locally; treat failing automation as a blocker
  • Review by risk: start at architecture boundaries and validate dependency direction
  • Give actionable feedback with impact and suggested fixes; mark blockers clearly
  • Resolve blocking comments and re-run relevant checks before merging

Example use cases

  • Author preparing a PR that modifies data flow between services and needs reviewer attention on boundaries
  • Reviewer validating error handling and state transitions after a feature change
  • Team ensuring no secrets or PII leaks before merging telemetry or logging updates
  • Confirming tests cover both success and error paths for a payment flow
  • Cleaning up an AI-generated refactor by separating behavior changes from formatting

FAQ

What counts as a blocking issue?

Any problem that affects correctness, security, privacy, or testability should be blocking until fixed and re-verified.

How do I handle large changes?

Split into smaller PRs by objective, review architecture and dependency direction first, and use feature flags where appropriate.
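One way to split a large branch is into stacked PRs: the mechanical refactor merges first, and the behavior change builds on top of it. The sketch below demonstrates the branch layout in a throwaway repo (branch names and commit messages are purely illustrative):

```shell
# Stack two branches: PR 1 holds the pure refactor, PR 2 the behavior change.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "base"

git checkout -qb refactor-extract      # PR 1: mechanical refactor, no behavior change
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "extract shared helper"

git checkout -qb feature-impl          # PR 2: behavior change, built on PR 1
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "use helper for new feature"

git branch --list
```

Reviewers can then approve the low-risk refactor quickly and spend their attention on the behavioral PR.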