
code-review-quality skill

This skill guides context-driven code reviews to improve quality, testability, and maintainability with fast, blocker-first feedback.

npx playbooks add skill proffesor-for-testing/agentic-qe --skill code-review-quality

---
name: code-review-quality
description: "Conduct context-driven code reviews focusing on quality, testability, and maintainability. Use when reviewing code, providing feedback, or establishing review practices."
category: development-practices
priority: high
tokenEstimate: 900
agents: [qe-quality-analyzer, qe-security-scanner, qe-performance-tester, qe-coverage-analyzer]
implementation_status: optimized
optimization_version: 1.0
last_optimized: 2025-12-02
dependencies: []
quick_reference_card: true
tags: [code-review, feedback, quality, testability, maintainability, pr-review]
---

# Code Review Quality

<default_to_action>
When reviewing code or establishing review practices:
1. PRIORITIZE feedback: 🔴 Blocker (must fix) → 🟡 Major → 🟢 Minor → 💡 Suggestion
2. FOCUS on: Bugs, security, testability, maintainability (not style preferences)
3. ASK questions over commands: "Have you considered...?" > "Change this to..."
4. PROVIDE context: Why this matters, not just what to change
5. LIMIT scope: Review < 400 lines at a time for effectiveness

**Quick Review Checklist:**
- Logic: Does it work correctly? Edge cases handled?
- Security: Input validation? Auth checks? Injection risks?
- Testability: Can this be tested? Is it tested?
- Maintainability: Clear naming? Single responsibility? DRY?
- Performance: O(n²) loops? N+1 queries? Memory leaks? (see the N+1 sketch after this block)

**Critical Success Factors:**
- Review the code, not the person
- Catching bugs > nitpicking style
- Fast feedback (< 24h) > thorough feedback
</default_to_action>
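
The N+1 item in the performance checklist above is easiest to catch with its canonical shape in mind. Below is a minimal sketch of the pattern and its batched fix; the `Db` interface and function names are illustrative assumptions, not part of this skill.

```typescript
// Hypothetical minimal DB interface, in the query style used by the templates below.
interface Db {
  query(sql: string, params?: unknown[]): Promise<unknown[]>;
}

// N+1 shape: one round-trip per order (1 query for the orders + N for the users).
async function getUsersForOrdersSlow(orders: { userId: number }[], db: Db) {
  const users: unknown[] = [];
  for (const order of orders) {
    const rows = await db.query('SELECT * FROM users WHERE id = ?', [order.userId]);
    users.push(...rows);
  }
  return users;
}

// Batched fix: deduplicate ids and fetch them in one parameterized query.
async function getUsersForOrders(orders: { userId: number }[], db: Db) {
  const ids = [...new Set(orders.map((o) => o.userId))];
  if (ids.length === 0) return [];
  const placeholders = ids.map(() => '?').join(', ');
  return db.query(`SELECT * FROM users WHERE id IN (${placeholders})`, ids);
}
```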

## Quick Reference Card

### When to Use
- PR code reviews
- Pair programming feedback
- Establishing team review standards
- Mentoring developers

### Feedback Priority Levels
| Level | Icon | Meaning | Action |
|-------|------|---------|--------|
| Blocker | 🔴 | Bug/security/crash | Must fix before merge |
| Major | 🟡 | Logic issue/test gap | Should fix before merge |
| Minor | 🟢 | Style/naming | Nice to fix |
| Suggestion | 💡 | Alternative approach | Consider for future |

### Review Scope Limits
| Lines Changed | Recommendation |
|---------------|----------------|
| < 200 | Single review session |
| 200-400 | Review in chunks |
| > 400 | Request PR split |
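
If a review bot enforces this gate automatically, the table reduces to a small helper. A sketch using the thresholds above; the function name is an illustrative assumption:

```typescript
// Map a diff size to the recommendation in the scope table above.
function reviewScopeAdvice(linesChanged: number): string {
  if (linesChanged < 200) return 'Single review session';
  if (linesChanged <= 400) return 'Review in chunks';
  return 'Request PR split';
}
```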

### What to Focus On
| ✅ Review | ❌ Skip |
|-----------|---------|
| Logic correctness | Formatting (use linter) |
| Security risks | Naming preferences |
| Test coverage | Architecture debates |
| Performance issues | Style opinions |
| Error handling | Trivial changes |

---

## Feedback Templates

### Blocker (Must Fix)
````markdown
🔴 **BLOCKER: SQL Injection Risk**

This query is vulnerable to SQL injection:
```javascript
db.query(`SELECT * FROM users WHERE id = ${userId}`)
```

**Fix:** Use parameterized queries:
```javascript
db.query('SELECT * FROM users WHERE id = ?', [userId])
```

**Why:** User input directly in SQL allows attackers to execute arbitrary queries.
````

### Major (Should Fix)
````markdown
🟡 **MAJOR: Missing Error Handling**

What happens if `fetchUser()` throws? The error bubbles up unhandled.

**Suggestion:** Add try/catch with appropriate error response:
```javascript
try {
  const user = await fetchUser(id);
  return user;
} catch (error) {
  logger.error('Failed to fetch user', { id, error });
  throw new NotFoundError('User not found');
}
```
````

### Minor (Nice to Fix)
```markdown
🟢 **MINOR:** Variable name could be clearer

`d` doesn't convey meaning. Consider `daysSinceLastLogin`.
```

### Suggestion (Consider)
```markdown
💡 **SUGGESTION:** Consider extracting this to a helper

This validation logic appears in 3 places. A `validateEmail()` helper would reduce duplication. Not blocking, but might be worth a follow-up PR.
```
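
For reference, the extraction suggested in that template might look like the sketch below; the regex is a deliberately loose placeholder, not a full RFC 5322 validator.

```typescript
// Hypothetical helper consolidating the three duplicated validation sites.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

export function validateEmail(email: string): boolean {
  return EMAIL_PATTERN.test(email.trim());
}
```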

---

## Review Questions to Ask

### Logic
- What happens when X is null/empty/negative?
- Is there a race condition here?
- What if the API call fails?
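
When the diff leaves these questions open, ask the author to make the edge cases explicit rather than guessing. A sketch of the guard clauses a reviewer might request; the function is illustrative:

```typescript
// Illustrative: define behavior for null/empty input and fail loudly on bad data.
function averageOrderValue(totals: number[] | null): number {
  if (!totals || totals.length === 0) return 0; // null/empty: define the result explicitly
  if (totals.some((t) => t < 0)) {
    throw new RangeError('Order totals must be non-negative'); // negative: reject loudly
  }
  return totals.reduce((sum, t) => sum + t, 0) / totals.length;
}
```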

### Security
- Is user input validated/sanitized?
- Are auth checks in place?
- Any secrets or PII exposed?

### Testability
- How would you test this?
- Are dependencies injectable?
- Is there a test for the happy path? Edge cases?
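
"Are dependencies injectable?" usually decides the other two questions. A minimal sketch of the refactor a reviewer might suggest; the names are illustrative:

```typescript
// Hard to test: hidden dependency on the real clock.
function isExpiredHidden(expiresAt: number): boolean {
  return Date.now() > expiresAt;
}

// Testable: inject the clock, defaulting to the real one in production.
type Clock = () => number;

function isExpired(expiresAt: number, now: Clock = Date.now): boolean {
  return now() > expiresAt;
}

// A test can pass a fixed clock: isExpired(100, () => 200) === true.
```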

### Maintainability
- Will the next developer understand this?
- Is this doing too many things?
- Is there duplication we could reduce?

---

## Agent-Assisted Reviews

```typescript
// Comprehensive code review
await Task("Code Review", {
  prNumber: 123,
  checks: ['security', 'performance', 'testability', 'maintainability'],
  feedbackLevels: ['blocker', 'major', 'minor'],
  autoApprove: { maxBlockers: 0, maxMajor: 2 }
}, "qe-quality-analyzer");

// Security-focused review
await Task("Security Review", {
  prFiles: changedFiles,
  scanTypes: ['injection', 'auth', 'secrets', 'dependencies']
}, "qe-security-scanner");

// Test coverage review
await Task("Coverage Review", {
  prNumber: 123,
  requireNewTests: true,
  minCoverageDelta: 0
}, "qe-coverage-analyzer");
```

---

## Agent Coordination Hints

### Memory Namespace
```
aqe/code-review/
├── review-history/*     - Past review decisions
├── patterns/*           - Common issues by team/repo
├── feedback-templates/* - Reusable feedback
└── metrics/*            - Review turnaround time
```
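
The storage API itself depends on the fleet runtime, so the sketch below only shows key construction for this layout; `reviewKey` is a hypothetical helper, not part of the skill:

```typescript
// Hypothetical helper: build keys under the aqe/code-review/ namespace above.
type ReviewArea = 'review-history' | 'patterns' | 'feedback-templates' | 'metrics';

function reviewKey(area: ReviewArea, id: string): string {
  return `aqe/code-review/${area}/${id}`;
}

// reviewKey('metrics', 'turnaround-2025-01') -> 'aqe/code-review/metrics/turnaround-2025-01'
```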

### Fleet Coordination
```typescript
const reviewFleet = await FleetManager.coordinate({
  strategy: 'code-review',
  agents: [
    'qe-quality-analyzer',    // Logic, maintainability
    'qe-security-scanner',    // Security risks
    'qe-performance-tester',  // Performance issues
    'qe-coverage-analyzer'    // Test coverage
  ],
  topology: 'parallel'
});
```

---

## Review Etiquette

| ✅ Do | ❌ Don't |
|-------|---------|
| "Have you considered...?" | "This is wrong" |
| Explain why it matters | Just say "fix this" |
| Acknowledge good code | Only point out negatives |
| Suggest, don't demand | Be condescending |
| Review < 400 lines | Review 2000 lines at once |

---

## Related Skills
- [agentic-quality-engineering](../agentic-quality-engineering/) - Agent coordination
- [security-testing](../security-testing/) - Security review depth
- [refactoring-patterns](../refactoring-patterns/) - Maintainability patterns

---

## Remember

**Prioritize feedback:** 🔴 Blocker → 🟡 Major → 🟢 Minor → 💡 Suggestion. Focus on bugs and security, not style. Ask questions, don't command. Review < 400 lines at a time. Fast feedback (< 24h) beats thorough feedback.

**With Agents:** Agents automate security, performance, and coverage checks, freeing human reviewers to focus on logic and design. Use agents for consistent, fast initial review.

Overview

This skill conducts context-driven code reviews that prioritize quality, testability, and maintainability. It combines clear feedback levels (blocker β†’ major β†’ minor β†’ suggestion) with focused checklists to surface bugs, security risks, and test gaps. Use it to deliver fast, actionable reviews and to establish consistent team review practices.

How this skill works

The skill inspects code changes and applies a prioritized checklist: logic correctness, security, testability, maintainability, and performance. It frames feedback as questions and context-rich suggestions, classifying issues by impact and providing templates for blocker/major/minor feedback. It can coordinate specialized agents for automated security, performance, and coverage scans while leaving human reviewers to evaluate design and edge cases.

When to use it

  • Reviewing pull requests or code diffs (prefer < 400 lines per session)
  • Pair programming sessions where quick, constructive feedback is needed
  • Establishing or enforcing team code review standards and templates
  • Mentoring engineers on testability and maintainability best practices
  • Running agent-assisted scans (security, performance, coverage) before manual review

Best practices

  • Prioritize issues: Blocker β†’ Major β†’ Minor β†’ Suggestion; focus on bugs and security first
  • Ask questions rather than issuing commands to encourage discussion
  • Provide context: explain why a change matters, not just what to change
  • Limit review scope to under 400 lines; request PR splits for larger changes
  • Aim for fast feedback (< 24 hours) and avoid personal critiquesβ€”review the code, not the author

Example use cases

  • Run a comprehensive review for a PR that includes logic, security, performance, and maintainability checks
  • Trigger a security-focused agent scan to flag injection, auth, secrets, and dependency risks
  • Evaluate test coverage changes and require new tests when a feature lacks coverage
  • Use feedback templates to file consistent blocker or major comments on critical issues
  • Coordinate a fleet of review agents to parallelize checks and surface automated findings before human review

FAQ

How should I handle stylistic issues?

Defer trivial formatting and style to linters and CI. Focus reviewer comments on bugs, security, test gaps, and naming that impacts clarity.

What if a PR is larger than 400 lines?

Ask the author to split the PR into smaller, focused changes, or review it in chunks to maintain effectiveness.