
review-performance skill

/dotclaude/skills/review-performance

This skill performs a performance-focused code review, identifying N+1 queries, unnecessary re-renders, memory leaks, and other inefficient patterns that slow applications down.

npx playbooks add skill shotaiuchi/dotclaude --skill review-performance

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
---
name: review-performance
description: >-
  Performance-focused code review. Apply when reviewing code for
  N+1 queries, unnecessary re-renders, memory leaks, inefficient algorithms,
  database access patterns, caching, and resource optimization.
user-invocable: false
---

# Performance Review

Review code from a performance perspective.

## Review Checklist

### Database & Queries
- Check for N+1 query problems (see the sketch after this list)
- Verify proper use of indexes
- Look for unnecessary data fetching (SELECT *)
- Check batch operations vs individual queries
- Verify connection pool configuration
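
The first item above is the classic case. Below is a minimal TypeScript sketch, assuming a hypothetical parameterized `query` helper and Postgres-style placeholders; the table and column names are illustrative stand-ins for whatever client or ORM the codebase uses.

```typescript
// `query` stands in for any DB client's parameterized query API (hypothetical).
type Query = <T>(sql: string, params?: unknown[]) => Promise<T[]>;

interface Post { id: number; authorId: number; title: string; }
interface Author { id: number; name: string; }

// N+1 pattern: one query for the posts, then one extra query per post.
async function listPostsNPlusOne(query: Query) {
  const posts = await query<Post>("SELECT id, author_id AS \"authorId\", title FROM posts");
  const result: Array<Post & { author?: Author }> = [];
  for (const post of posts) {
    const [author] = await query<Author>(
      "SELECT id, name FROM authors WHERE id = $1",
      [post.authorId],
    );
    result.push({ ...post, author });
  }
  return result; // 1 + N round trips to the database
}

// Batched alternative: fetch every referenced author in one query, join in memory.
async function listPostsBatched(query: Query) {
  const posts = await query<Post>("SELECT id, author_id AS \"authorId\", title FROM posts");
  const authorIds = [...new Set(posts.map((p) => p.authorId))];
  const authors = await query<Author>(
    "SELECT id, name FROM authors WHERE id = ANY($1)",
    [authorIds],
  );
  const byId = new Map(authors.map((a): [number, Author] => [a.id, a]));
  return posts.map((p) => ({ ...p, author: byId.get(p.authorId) })); // 2 round trips
}
```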

### Memory & Resources
- Check for memory leaks (unclosed resources, retained references); see the cleanup sketch after this list
- Verify proper cleanup in lifecycle methods
- Look for unnecessary object creation in hot paths
- Check for unbounded collections or caches
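
A short sketch of the cleanup pattern behind the first two items, assuming a browser-style environment with `document` and timers; the class and method names are invented, and the hook that calls `dispose()` (unmount, deinit, onDestroy) depends on the framework in use.

```typescript
// A poller that owns a timer and an event listener; without dispose(), both
// keep firing and keep this object (and everything it references) reachable.
class MetricsPoller {
  private timer?: ReturnType<typeof setInterval>;
  private readonly onVisibility = () => this.poll();

  start(intervalMs: number): void {
    // Leak risk 1: an interval that is never cleared runs for the page's lifetime.
    this.timer = setInterval(() => this.poll(), intervalMs);
    // Leak risk 2: a listener on a long-lived target retains `this` indefinitely.
    document.addEventListener("visibilitychange", this.onVisibility);
  }

  // Must be called from the owning component's teardown path.
  dispose(): void {
    if (this.timer !== undefined) clearInterval(this.timer);
    this.timer = undefined;
    document.removeEventListener("visibilitychange", this.onVisibility);
  }

  private poll(): void {
    // ...collect and report metrics...
  }
}
```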

### Algorithm & Data Structures
- Verify appropriate time/space complexity
- Check for unnecessary nested loops (see the lookup sketch after this list)
- Look for redundant computation that could be cached
- Verify efficient use of data structures
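
For the nested-loop item, the usual fix is to trade a repeated linear scan for a one-time index. A sketch with invented `Order` and `Customer` shapes:

```typescript
interface Order { id: number; customerId: number; }
interface Customer { id: number; name: string; }

// O(n * m): every order scans the whole customer list.
function attachCustomersQuadratic(orders: Order[], customers: Customer[]) {
  return orders.map((order) => ({
    ...order,
    customer: customers.find((c) => c.id === order.customerId),
  }));
}

// O(n + m): build a Map once, then do constant-time lookups per order.
function attachCustomersLinear(orders: Order[], customers: Customer[]) {
  const byId = new Map(customers.map((c): [number, Customer] => [c.id, c]));
  return orders.map((order) => ({ ...order, customer: byId.get(order.customerId) }));
}
```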

### UI & Rendering
- Check for unnecessary re-renders or recompositions (see the memoization sketch after this list)
- Verify lazy loading for large lists
- Look for blocking operations on main/UI thread
- Check image loading and caching strategy
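
Assuming a React codebase (other UI stacks have analogous tools, such as memoized selectors or `remember` in Jetpack Compose), a small sketch of limiting re-renders with `memo` and a stable callback; the component and prop names are invented:

```tsx
import React, { memo, useCallback, useState } from "react";

// Memoized child: re-renders only when its props actually change.
const Row = memo(function Row(props: { label: string; onSelect: (label: string) => void }) {
  return <li onClick={() => props.onSelect(props.label)}>{props.label}</li>;
});

function List({ labels }: { labels: string[] }) {
  const [selected, setSelected] = useState<string | null>(null);

  // Without useCallback, a new onSelect is created on every render, so every
  // memoized Row sees a "changed" prop and re-renders anyway.
  const onSelect = useCallback((label: string) => setSelected(label), []);

  return (
    <>
      <p>Selected: {selected ?? "none"}</p>
      <ul>
        {labels.map((label) => (
          <Row key={label} label={label} onSelect={onSelect} />
        ))}
      </ul>
    </>
  );
}

export default List;
```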

### Network & I/O
- Verify proper use of async/concurrent operations
- Check for unnecessary API calls
- Look for missing pagination
- Verify timeout and retry configurations (sketched after this list)
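
A sketch of bounded timeouts, retries with backoff, and running independent calls concurrently, assuming a runtime with a global `fetch` and `AbortController` (modern browsers or Node 18+); the URLs and retry numbers are placeholders to tune per service:

```typescript
// Abort the request if it exceeds the timeout.
async function fetchWithTimeout(url: string, timeoutMs: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

// Bounded retries with exponential backoff; rethrows the last failure.
async function fetchWithRetry(url: string, attempts = 3, timeoutMs = 5_000): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      const response = await fetchWithTimeout(url, timeoutMs);
      if (response.ok) return response;
      lastError = new Error(`HTTP ${response.status}`);
    } catch (err) {
      lastError = err; // timeout (AbortError) or network failure
    }
    await new Promise((resolve) => setTimeout(resolve, 250 * 2 ** (attempt - 1)));
  }
  throw lastError;
}

// Independent calls should run concurrently rather than one after another,
// and list endpoints should be paginated rather than fetched whole.
async function loadDashboard(userId: string) {
  const [profileRes, ordersRes] = await Promise.all([
    fetchWithRetry(`/api/users/${userId}`),
    fetchWithRetry(`/api/users/${userId}/orders?page=1&pageSize=50`),
  ]);
  return { profile: await profileRes.json(), orders: await ordersRes.json() };
}
```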

## Output Format

Report findings with impact ratings:

| Impact | Description |
|--------|-------------|
| Critical | Causes visible degradation or crashes under load |
| High | Noticeable impact on user experience |
| Medium | Measurable but not immediately visible |
| Low | Micro-optimization, minor improvement |

Overview

This skill performs performance-focused code reviews to identify hotspots and inefficiencies that impact scalability, latency, and memory. It targets database access patterns, CPU and memory usage, rendering behavior, and I/O concurrency to produce actionable findings with impact ratings.

How this skill works

I inspect code for common performance anti-patterns: N+1 queries, inefficient algorithms, unbounded memory usage, unnecessary re-renders, blocking UI operations, and poor network handling. Findings are reported with concise descriptions, the likely impact (Critical/High/Medium/Low), and concrete remediation suggestions or examples.

When to use it

  • Before a release that must meet latency or scalability targets
  • When investigating increased CPU, memory, or DB load
  • During code review for backend endpoints, services, or data access layers
  • When UI responsiveness or rendering performance is degrading
  • While auditing third-party integrations or caching strategies

Best practices

  • Prioritize fixes by impact and frequency of the affected code path
  • Measure before and after changes using targeted benchmarks or traces (see the timing sketch after this list)
  • Prefer batching, pagination, and indexed queries for DB-heavy flows
  • Avoid allocating in hot loops; reuse buffers and clean up resources
  • Offload expensive work from the main/UI thread and add limits to caches
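
To make the "measure before and after" point concrete, here is a rough timing harness sketch using Node's perf_hooks; it is no substitute for a real benchmarking tool or production tracing, just a quick way to compare two candidate implementations on representative inputs.

```typescript
import { performance } from "node:perf_hooks";

// Warm up, then time repeated runs and report the median, which is more
// stable than a single measurement.
async function timeIt(label: string, fn: () => unknown | Promise<unknown>, runs = 20): Promise<void> {
  for (let i = 0; i < 3; i++) await fn(); // warm-up iterations
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fn();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const median = samples[Math.floor(runs / 2)];
  console.log(`${label}: median ${median.toFixed(2)} ms over ${runs} runs`);
}

// Hypothetical usage: run the old and new code paths on the same input.
// await timeIt("before", () => oldImplementation(testInput));
// await timeIt("after", () => newImplementation(testInput));
```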

Example use cases

  • Detect and remediate an N+1 query in a list endpoint, replacing per-item queries with a single join or batch fetch
  • Find and fix repeated expensive renders by memoizing components or reducing prop churn
  • Identify unbounded growth in an in-memory cache and recommend eviction policies or size limits (see the cache sketch after this list)
  • Spot synchronous network calls on the UI thread and convert them to async with proper timeouts and retries
  • Replace O(n^2) nested loops with a hash-based lookup to reduce CPU cost on large inputs
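
For the unbounded-cache use case, a minimal sketch of a size-bounded cache with least-recently-used eviction, relying on Map preserving insertion order; a real project would more likely reach for an existing cache library with TTL support, so treat this as the shape of the fix rather than a drop-in implementation.

```typescript
class BoundedCache<K, V> {
  private readonly entries = new Map<K, V>();

  constructor(private readonly maxEntries: number) {}

  get(key: K): V | undefined {
    const value = this.entries.get(key);
    if (value !== undefined) {
      // Re-insert so this key becomes the most recently used.
      this.entries.delete(key);
      this.entries.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxEntries) {
      // Evict the least recently used entry (first key in insertion order).
      const oldest = this.entries.keys().next().value as K;
      this.entries.delete(oldest);
    }
  }
}

// Hypothetical usage: cap a per-process lookup cache at 1,000 entries.
const userCache = new BoundedCache<string, { id: string; name: string }>(1_000);
```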

FAQ

How are impact ratings assigned?

Ratings are based on potential user impact and likelihood: Critical for crashes or severe degradation under load, High for noticeable UX impact, Medium for measurable inefficiencies, Low for micro-optimizations.

Do you provide code changes or just recommendations?

The review gives concrete remediation steps and example patterns; I can also suggest specific code edits when samples are available.