
code-review skill


This skill automates thorough code reviews for pull requests, improving security, performance, and quality across changes.

npx playbooks add skill skillcreatorai/ai-agent-skills --skill code-review

Files (1): SKILL.md (2.7 KB)
---
name: code-review
description: Automated code review for pull requests using specialized review patterns. Analyzes code for quality, security, performance, and best practices. Use when reviewing code changes, PRs, or doing code audits.
source: anthropics/claude-code
license: Apache-2.0
---

# Code Review

## Review Categories

### 1. Security Review
Check for:
- SQL injection vulnerabilities
- XSS (Cross-Site Scripting)
- Command injection
- Insecure deserialization
- Hardcoded secrets/credentials
- Improper authentication/authorization
- Insecure direct object references
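The XSS item above follows the same interpolation-of-untrusted-input shape as the SQL example later in this skill. A minimal sketch (the `escapeHtml` helper is illustrative, not part of the skill):

```javascript
// BAD: interpolating untrusted input straight into markup enables XSS
// element.innerHTML = `<p>Hello, ${userName}</p>`;

// GOOD: escape untrusted input before it reaches the DOM
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')   // must run first, or later entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

In frameworks that escape by default (React, Vue templates), the flag is usually an explicit escape hatch such as `dangerouslySetInnerHTML` or `v-html` fed with user data.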

### 2. Performance Review
Check for:
- N+1 queries
- Missing database indexes
- Unnecessary re-renders (React)
- Memory leaks
- Blocking operations in async code
- Missing caching opportunities
- Large bundle sizes
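For the "missing caching opportunities" item, a common low-risk fix is memoizing a pure, repeatedly-called function. A generic sketch (cache invalidation and size limits are deliberately left out):

```javascript
// BAD: recomputing an expensive pure function on every call
// const result = expensiveLookup(key);

// GOOD: memoize pure, repeatable computations
function memoize(fn) {
  const cache = new Map();
  return (key) => {
    if (!cache.has(key)) cache.set(key, fn(key)); // compute once per key
    return cache.get(key);
  };
}
```

Only suggest this for genuinely pure functions; memoizing anything with side effects or time-dependent results trades a performance nit for a correctness bug.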

### 3. Code Quality Review
Check for:
- Code duplication (DRY violations)
- Functions doing too much (SRP violations)
- Deep nesting / complex conditionals
- Magic numbers/strings
- Poor naming
- Missing error handling
- Incomplete type coverage
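Deep nesting and magic numbers often fix together with a named constant and guard clauses. A sketch (the `canCheckout` function and `MINIMUM_AGE` are hypothetical):

```javascript
// BAD: deep nesting plus a magic number
// function canCheckout(user) {
//   if (user) {
//     if (user.verified) {
//       if (user.age >= 18) return true;
//     }
//   }
//   return false;
// }

// GOOD: a named constant and early returns flatten the logic
const MINIMUM_AGE = 18;

function canCheckout(user) {
  if (!user) return false;
  if (!user.verified) return false;
  return user.age >= MINIMUM_AGE;
}
```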

### 4. Testing Review
Check for:
- Missing test coverage for new code
- Tests that don't test behavior
- Flaky test patterns
- Missing edge cases
- Over-mocked external dependencies that hide real integration behavior

## Review Output Format

```markdown
## Code Review Summary

### 🔴 Critical (Must Fix)
- **[File:Line]** [Issue description]
  - **Why:** [Explanation]
  - **Fix:** [Suggested fix]

### 🟡 Suggestions (Should Consider)
- **[File:Line]** [Issue description]
  - **Why:** [Explanation]
  - **Fix:** [Suggested fix]

### 🟢 Nits (Optional)
- **[File:Line]** [Minor suggestion]

### ✅ What's Good
- [Positive feedback on good patterns]
```

## Common Patterns to Flag

### Security
```javascript
// BAD: SQL injection
const query = `SELECT * FROM users WHERE id = ${userId}`;

// GOOD: Parameterized query
const query = 'SELECT * FROM users WHERE id = $1';
await db.query(query, [userId]);
```

### Performance
```javascript
// BAD: N+1 query
users.forEach(async user => {
  const posts = await getPosts(user.id);
});

// GOOD: Batch query
const userIds = users.map(u => u.id);
const posts = await getPostsForUsers(userIds);
```

### Error Handling
```javascript
// BAD: Swallowing errors
try {
  await riskyOperation();
} catch (e) {}

// GOOD: Handle or propagate
try {
  await riskyOperation();
} catch (e) {
  logger.error('Operation failed', { error: e });
  throw new AppError('Operation failed', { cause: e });
}
```

## Review Checklist

- [ ] No hardcoded secrets
- [ ] Input validation present
- [ ] Error handling complete
- [ ] Types/interfaces defined
- [ ] Tests added for new code
- [ ] No obvious performance issues
- [ ] Code is readable and documented
- [ ] Breaking changes documented

Overview

This skill performs automated code reviews for pull requests using specialized review patterns focused on security, performance, quality, and testing. It produces structured, actionable feedback with prioritized findings and suggested fixes. Use it to accelerate reviews, enforce standards, and reduce regressions before merging.

How this skill works

The skill analyzes diff contents and inspects files for known anti-patterns: injection risks, inefficient queries, missing indexes, and poor error handling. It classifies findings into Critical, Suggestions, and Nits and outputs concise rationale and recommended fixes for each issue. A checklist and positives section help maintainers track required changes and recognize good patterns.

When to use it

  • Reviewing pull requests or feature branches before merge
  • Performing security-focused code audits or threat-modeling reviews
  • Evaluating performance regressions after changes
  • Validating new code for testing and type coverage
  • Onboarding contributors to reinforce coding standards

Best practices

  • Prioritize fixes labeled Critical; these usually block merging
  • Include minimal code snippets for reproducing and fixing issues
  • Reference specific lines/files so developers can act quickly
  • Combine automated findings with a short human review for context
  • Add tests and update documentation when fixing behavior-impacting issues

Example use cases

  • Find and fix SQL/command injection and hardcoded secrets in a PR
  • Detect N+1 queries and suggest batching or eager loading changes
  • Identify missing tests for new logic and recommend cases to add
  • Flag swallowed errors and propose proper logging or rethrowing
  • Spot unnecessary React re-renders or oversized bundles and suggest improvements

FAQ

How are issues prioritized?

Findings are grouped as Critical, Suggestions, and Nits. Critical items indicate security or correctness problems that should block merge; Suggestions help performance or maintainability; Nits are optional style improvements.

Can it detect secrets and vulnerable dependencies?

It flags likely hardcoded secrets and insecure patterns in code. Dependency vulnerability scanning is recommended as a complementary step using dedicated scanners.