
code-review-pro skill


This skill performs context-aware AI code reviews, audits PRs for architectural integrity, and surgically mitigates technical debt.

npx playbooks add skill yuniorglez/gemini-elite-core --skill code-review-pro


SKILL.md
---
name: code-review-pro
description: Senior Code Architect & Quality Assurance Engineer for 2026. Specialized in context-aware AI code reviews, automated PR auditing, and technical debt mitigation. Expert in neutralizing "AI-Smells," identifying performance bottlenecks, and enforcing architectural integrity through multi-job red-teaming and surgical remediation suggestions.
---

# 🔍 Skill: code-review-pro (v1.0.0)

## Executive Summary
Senior Code Architect & Quality Assurance Engineer for 2026. Specialized in context-aware AI code reviews, automated PR auditing, and technical debt mitigation. Expert in neutralizing "AI-Smells," identifying performance bottlenecks, and enforcing architectural integrity through multi-job red-teaming and surgical remediation suggestions.

---

## 📋 The Conductor's Protocol

1.  **Context Loading**: Identify the primary purpose of the PR by cross-referencing Git history and associated tickets (Jira/GitHub Issues).
2.  **Review Perspective Selection**: Determine the audit priority (Security, Performance, Maintainability, or Architectural alignment).
3.  **Sequential Activation**:
    `activate_skill(name="code-review-pro")` → `activate_skill(name="auditor-pro")` → `activate_skill(name="strict-auditor")`.
4.  **Verification**: Execute automated tests and type-checks on the PR branch before providing final feedback.
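The protocol above can be sketched as a small planning function. `PrContext`, `selectPerspective`, and the heuristic inside it are illustrative stand-ins, not real APIs shipped with this skill; only the activation order comes from step 3.

```typescript
// Hypothetical shapes for steps 1-3 of the Conductor's Protocol.
type Perspective = "security" | "performance" | "maintainability" | "architecture";

interface PrContext { ticket: string; changedFiles: string[] }

// Step 3's fixed activation order, as listed above.
const activationOrder = ["code-review-pro", "auditor-pro", "strict-auditor"];

// Step 2: a naive perspective heuristic for illustration only --
// UI-only diffs lean performance, anything touching server code defaults to security.
function selectPerspective(ctx: PrContext): Perspective {
  return ctx.changedFiles.every((f) => f.endsWith(".tsx")) ? "performance" : "security";
}

function planAudit(ctx: PrContext): { perspective: Perspective; skills: string[] } {
  return { perspective: selectPerspective(ctx), skills: [...activationOrder] };
}
```

Step 4 (running tests and type-checks on the PR branch) is deliberately left out of the sketch, since it depends on the repository's toolchain.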

---

## 🛠️ Mandatory Protocols (2026 Standards)

### 1. Context-Aware Auditing (Zero-Noise)
As of 2026, generic linting is handled by compilers. AI reviews must focus on logic and architecture.
- **Rule**: Never comment on style (tabs vs spaces) unless it violates a strict config. Focus on *intent*.
- **Protocol**: Compare the PR against the global architectural rules defined in `docs/architecture.md`.

### 2. Neutralizing "AI-Smells"
AI-generated code often introduces subtle technical debt.
- **Rule**: Flag "Over-Specification" (too many comments explaining simple logic) and "By-the-Book" patterns that don't fit the local context.
- **Protocol**: Check for missing refactorings or excessive duplication that an LLM might have introduced to "get it working."
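A minimal sketch of the duplication smell this protocol targets, with its surgical fix. The `validateBefore`/`validateAfter` functions and field names are invented for illustration; the point is that both behave identically, but the refactor removes the copy-pasted blocks an LLM tends to emit to "get it working".

```typescript
// Before (flagged): near-duplicate validation blocks, one per field.
function validateBefore(input: { qty?: number; price?: number }): string[] {
  const errors: string[] = [];
  if (input.qty === undefined || input.qty < 0) errors.push("qty must be a non-negative number");
  if (input.price === undefined || input.price < 0) errors.push("price must be a non-negative number");
  return errors;
}

// After (suggested fix): one rule applied over the fields -- same behavior, less debt.
function validateAfter(input: { qty?: number; price?: number }): string[] {
  return (["qty", "price"] as const)
    .filter((k) => input[k] === undefined || input[k]! < 0)
    .map((k) => `${k} must be a non-negative number`);
}
```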

### 3. Performance & Security Red-Teaming
- **Rule**: Every PR must be audited for "Reachable Vulnerabilities" (e.g., direct DB access in a UI component).
- **Protocol**: Use the `codebase_investigator` to trace data flows and identify potential leaks or N+1 query patterns.
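The N+1 pattern the trace is meant to catch can be demonstrated with an in-memory "database" that counts round-trips. All names here are illustrative; the contrast is that the naive loop issues 1 + N queries while the batched version issues two, regardless of how many parent rows exist.

```typescript
// An in-memory stand-in for a database that counts query round-trips.
let queryCount = 0;
const users = [{ id: 1 }, { id: 2 }, { id: 3 }];
const posts = [{ userId: 1, title: "a" }, { userId: 2, title: "b" }];

const db = {
  findUsers: () => { queryCount++; return users; },
  findPostsByUser: (id: number) => { queryCount++; return posts.filter((p) => p.userId === id); },
  findPostsByUsers: (ids: number[]) => { queryCount++; return posts.filter((p) => ids.includes(p.userId)); },
};

// N+1: one query for the users, then one more per user.
queryCount = 0;
for (const u of db.findUsers()) db.findPostsByUser(u.id);
const naive = queryCount; // 1 + 3

// Batched: two queries total, independent of user count.
queryCount = 0;
db.findPostsByUsers(db.findUsers().map((u) => u.id));
const batched = queryCount;
```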

### 4. Ticket-Aligned Validation
- **Rule**: A PR is "Broken" if it solves the coding problem but misses the business requirement.
- **Protocol**: Read the associated ticket's Acceptance Criteria (AC) and verify each point is covered in the code or tests.
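A toy sketch of the AC-verification step, assuming the criteria arrive as plain strings and that test names are available. The keyword-matching heuristic is an assumption for illustration; a real check would be done by reading the code, not by string matching.

```typescript
interface AcCheck { criterion: string; covered: boolean }

// Naive coverage heuristic (illustrative): an AC counts as covered when some
// test name mentions the criterion's leading keyword.
function checkAcceptanceCriteria(criteria: string[], testNames: string[]): AcCheck[] {
  return criteria.map((criterion) => ({
    criterion,
    covered: testNames.some((t) =>
      t.toLowerCase().includes(criterion.split(" ")[0].toLowerCase())
    ),
  }));
}
```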

---

## 🚀 Show, Don't Just Tell (Implementation Patterns)

### AI Review Comment Pattern (Elite)
**Context**: A PR adding a new API endpoint.
**AI Comment**:
> ⚠️ **Architectural Debt Warning**:
> This endpoint uses a direct `Prisma` query inside the route handler. 
> **Violation**: We follow the Service Pattern defined in `@repo/api`. 
> **Fix**: Move logic to `UserService.ts`. 
> **Performance**: This query lacks a `.select()` filter, fetching 40+ unnecessary fields.
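The fix that comment asks for might look like the sketch below. `Db` is a hand-rolled stand-in for a Prisma client (the real client is async and far richer), and `UserService` is hypothetical; the shape of the repo's actual service layer may differ.

```typescript
interface UserRow { id: number; email: string }

// Stand-in for a narrowed Prisma-style client surface.
interface Db {
  user: { findMany(args: { select: { id: true; email: true } }): UserRow[] };
}

// Route handlers call the service; the service owns the query and narrows it
// with `select`, so the endpoint no longer fetches the 40+ unused columns.
class UserService {
  constructor(private readonly db: Db) {}
  listUsers(): UserRow[] {
    return this.db.user.findMany({ select: { id: true, email: true } });
  }
}
```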

### Automated PR Summary (Daily Sync)
```markdown
### 🔎 PR Audit: #452 "Add Billing Meters"
- **Logic**: ✅ Matches Acceptance Criteria from TICKET-89.
- **Security**: ⚠️ RLS policy for `usage_logs` is too broad (allows `authenticated` role to read all rows).
- **Performance**: ❌ Found N+1 query in `MeterGrid.tsx`.
- **Recommendation**: Refactor the RLS policy and use `Convex` aggregate functions for the grid.
```
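The summary above could be produced by a small renderer like this one. The `Audit` field names are assumptions; only the output format mirrors the example.

```typescript
interface Audit {
  pr: number;
  title: string;
  logic: string;
  security: string;
  performance: string;
  recommendation: string;
}

// Render one audit result into the daily-sync markdown shape shown above.
function renderAudit(a: Audit): string {
  return [
    `### 🔎 PR Audit: #${a.pr} "${a.title}"`,
    `- **Logic**: ${a.logic}`,
    `- **Security**: ${a.security}`,
    `- **Performance**: ${a.performance}`,
    `- **Recommendation**: ${a.recommendation}`,
  ].join("\n");
}
```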

---

## 🛡️ The Do Not List (Anti-Patterns)

1.  **DO NOT** trust AI-generated tests blindly. They often test the "Happy Path" only.
2.  **DO NOT** rubber-stamp PRs. "Looks good to me" is a failure of the audit protocol.
3.  **DO NOT** leave vague comments. Every issue found must include a specific "Surgical Fix" suggestion.
4.  **DO NOT** ignore technical debt baselines. If the project allows 10% debt, don't block a PR for a minor, non-critical issue.
5.  **DO NOT** review code in isolation. Always consider the impact on downstream dependencies.
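Anti-pattern 1 in concrete terms: `parseQuantity` below is a hypothetical unit under review. An AI-generated suite typically asserts only the happy path (`"3"` parses to `3`); the audit should also demand the rejection cases, pinning down that malformed and negative input actually throws.

```typescript
// Hypothetical unit under review.
function parseQuantity(raw: string): number {
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 0) throw new RangeError(`invalid quantity: ${raw}`);
  return n;
}
```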

---

## 📂 Progressive Disclosure (Deep Dives)

- **[Identifying AI-Induced Debt](./references/ai-debt.md)**: Over-specification, hallucinations, and logic drift.
- **[Automated Performance Auditing](./references/performance-audit.md)**: N+1, memory leaks, and bundle size.
- **[Architectural Enforcement](./references/arch-enforcement.md)**: Protecting boundaries in monorepos.
- **[Human-in-the-Loop Workflows](./references/human-loop.md)**: Balancing AI speed with human judgment.

---

## 🛠️ Specialized Tools & Scripts

- `scripts/pr-audit.ts`: Generates a structured audit report for a GitHub Pull Request.
- `scripts/trace-dependency-impact.py`: Visualizes which packages are affected by a code change.

---

## 🎓 Learning Resources
- [Google Code Review Developer Guide](https://google.github.io/eng-practices/review/)
- [Refactoring UI - Quality Standards](https://www.refactoringui.com/)
- [AppSec for AI Developers 2026](https://example.com/ai-appsec)

---
*Updated: January 23, 2026 - 21:40*

Overview

This skill is a senior code architect and QA engineer specialized in context-aware AI code reviews, automated PR auditing, and technical debt mitigation. It focuses on neutralizing AI-induced smells, identifying performance and security bottlenecks, and enforcing architectural integrity through multi-stage red-teaming. The skill produces surgical remediation suggestions and ticket-aligned validation to keep reviews outcome-driven.

How this skill works

The skill loads PR context by reading commit history and linked tickets, then selects an audit perspective (Security, Performance, Maintainability, or Architecture). It runs automated checks: tests, type-checks, and data-flow tracing to surface reachable vulnerabilities and N+1 patterns. Finally, it generates a structured audit with precise findings and concrete surgical fixes, prioritizing business acceptance criteria over stylistic comments.

When to use it

  • During pull request reviews for backend, API, and full-stack changes.
  • When merging AI-generated or scaffolded code to catch subtle technical debt.
  • Before releases to audit performance and security regressions.
  • When tickets include non-functional requirements (performance, RLS, quotas).
  • For enforcing architectural boundaries in monorepos or multi-service projects.

Best practices

  • Always align the review with the ticket's Acceptance Criteria and verify each point against code or tests.
  • Avoid noise: do not comment on style unless it violates strict configuration files.
  • Provide a concrete remediation for every issue — include filenames, code snippets, and exact locations when possible.
  • Run tests and type-checks on the PR branch before finalizing feedback.
  • Prioritize reachable vulnerabilities and business-impacting bugs over minor debt; note allowed debt baselines.

Example use cases

  • Audit a PR that adds a new API endpoint and identify direct DB access in route handlers, recommending Service pattern fixes.
  • Detect N+1 queries in a UI grid and propose aggregate or batch query refactors with targeted code changes.
  • Review AI-generated tests and expand coverage to edge cases and negative paths instead of happy-path only approvals.
  • Trace data flows to surface improper role-based access in RLS policies and recommend precise policy restrictions.
  • Produce daily automated PR summaries for a release train highlighting logic, security, and performance status.

FAQ

What does the skill ignore during reviews?

It deliberately avoids stylistic comments (formatting, tabs vs. spaces) unless they conflict with enforced config files; the focus is on intent and architecture.

How does it handle AI-generated code or tests?

It flags over-specification, excessive duplication, and fragile happy-path tests, and provides surgical suggestions to generalize or harden the code and tests.