
code-review skill


This skill guides post-development code reviews by orchestrating parallel sub-agents to surface findings and enforce fixes.

npx playbooks add skill sammcj/agentic-coding --skill code-review


SKILL.md
---
name: code-review
description: Use this skill after completing multiple complex software development tasks, before informing the user that work is complete.
---

# Guidelines For Performing Code Reviews After Completing Multiple Complex Software Development Tasks

1. Spawn parallel sub-agents tasked with a critical self-review of the changes you have made.
2. Compile findings into a concise numbered list with severity labels (critical/medium/low).
3. Verify each finding against the actual code to rule out false positives.
4. Implement all fixes and run the appropriate lint/test/build pipeline.

## Sub Agent Guidelines

- Instruct sub-agents to keep outputs concise, token-efficient, relevant, and actionable; focus them on your changes and tell them not to nitpick minor style issues.
- Scope the review to your changes, with clear boundaries.

Overview

This skill performs a focused, agentic code review after completing multiple complex development tasks to ensure changes are correct and release-ready. It runs parallel sub-agents to self-review, compiles prioritized findings, verifies each issue against the real code, and applies fixes before declaring work complete.

How this skill works

After you finish multiple changes, the skill spawns parallel sub-agents that inspect only the scoped modifications and produce concise, actionable findings. It aggregates those findings into a numbered list with severity labels, verifies each reported issue against the actual code to avoid false positives, and then implements fixes and re-runs lint/test/build pipelines to confirm resolution.
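The fan-out-and-aggregate flow above can be sketched in plain Python. This is a minimal illustration, not the skill's actual implementation: `review_scope` stands in for a real sub-agent call (which would go through your agent framework), and the `Finding` shape is an assumption.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

SEVERITY_ORDER = {"critical": 0, "medium": 1, "low": 2}

@dataclass
class Finding:
    severity: str  # "critical" | "medium" | "low"
    summary: str   # one-line impact summary

def review_scope(scope: list[str]) -> list[Finding]:
    """Stand-in for one sub-agent reviewing one slice of the diff.
    A real sub-agent would inspect the files; this returns a canned finding."""
    return [Finding("medium", f"unchecked error path in {scope[0]}")]

def run_review(scopes: list[list[str]]) -> list[str]:
    """Fan out one reviewer per scope, then merge results into a
    severity-sorted, numbered findings list."""
    with ThreadPoolExecutor(max_workers=len(scopes)) as pool:
        batches = pool.map(review_scope, scopes)  # preserves input order
    findings = [f for batch in batches for f in batch]
    findings.sort(key=lambda f: SEVERITY_ORDER[f.severity])
    return [f"{i}. [{f.severity}] {f.summary}" for i, f in enumerate(findings, 1)]
```

The key property is that aggregation happens in one place, so severity ordering and numbering stay consistent no matter how many sub-agents ran.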

When to use it

  • Before marking a batch of related changes as complete or merging to main.
  • After large refactors touching multiple modules or services.
  • Prior to a release or deploy to catch high-severity regressions early.
  • When multiple features or bugfixes land in the same PR or branch.
  • Before gating changes behind CI checks or code-ownership reviews.

Best practices

  • Scope each sub-agent to only the changed files or logical boundaries to avoid noise.
  • Instruct sub-agents to keep outputs concise and actionable, with no broad stylistic nitpicks.
  • Label every finding with severity (critical/medium/low) and a one-line impact summary.
  • Always verify findings against the actual code to eliminate false positives.
  • Implement fixes and re-run the full lint/test/build pipeline before reporting completion.
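One way to derive the per-agent scopes in the first bullet is to group the changed paths (for example, the output of `git diff --name-only main...HEAD`) by top-level directory. A hedged sketch, assuming directory boundaries roughly match logical boundaries in your repository:

```python
from collections import defaultdict
from pathlib import PurePosixPath

def scope_changed_files(changed: list[str]) -> dict[str, list[str]]:
    """Group changed file paths by top-level directory so each
    sub-agent receives one clearly bounded slice of the diff."""
    scopes: dict[str, list[str]] = defaultdict(list)
    for path in changed:
        top = PurePosixPath(path).parts[0]
        scopes[top].append(path)
    return dict(scopes)
```

If your modules don't map to top-level directories, swap the grouping key for whatever boundary fits (package, service, ownership file); the point is only that each sub-agent sees a closed set of files.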

Example use cases

  • A multi-module refactor where behavior changes might be scattered across packages.
  • Aggregating and validating fixes after a sprint that produced several interdependent PRs.
  • Preparing a release candidate by catching regressions introduced across feature branches.
  • Handling a backlog of bugfixes that touch shared utilities and require coordinated validation.
  • Gatekeeping CI failures by reproducing and resolving issues found by automated checks.

FAQ

How fast will the review run?

Speed depends on change size and test suite; parallel sub-agents speed up inspection but plan for full pipeline time when re-running builds and tests.

Can it auto-fix everything?

The skill auto-applies straightforward fixes (e.g., linting, small API adjustments) but will surface complex or design-impacting issues for human decision and review.
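The split between auto-applied and human-reviewed fixes can be expressed as a simple triage rule. The categories and the "never auto-fix criticals" policy below are illustrative assumptions, not part of the skill's definition:

```python
# Categories considered safe to auto-apply (illustrative set).
AUTO_FIXABLE = {"lint", "format", "import-order"}

def triage(findings: list[tuple[str, str]]) -> tuple[list, list]:
    """Split (category, severity) findings into auto-applicable fixes
    and items that should be surfaced for human review."""
    auto, human = [], []
    for category, severity in findings:
        # Even a nominally safe category is escalated if it's critical.
        if category in AUTO_FIXABLE and severity != "critical":
            auto.append((category, severity))
        else:
            human.append((category, severity))
    return auto, human
```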