
This skill performs targeted performance-focused code reviews, identifying bottlenecks across databases, memory, rendering, APIs, and algorithms to guide optimization.

npx playbooks add skill mastra-ai/mastra --skill performance-review

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: performance-review
description: Performance-focused code review for identifying bottlenecks and optimization opportunities
version: 1.0.0
metadata:
  tags:
    - code-review
    - performance
---

# Performance Review

When reviewing code for performance issues, check each category below. Reference the detailed checklist in `references/performance-checklist.md`.

## Database & Queries

- N+1 query patterns (queries inside loops)
- Missing database indexes for frequently queried fields
- Unbounded queries without LIMIT/pagination
- SELECT * instead of selecting only needed columns
- Missing connection pooling
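
The N+1 pattern from the checklist can be sketched with a hypothetical in-memory "database" (the `comments` array and `queryCount` counter below are illustrative stand-ins, not part of any real ORM): the loop version issues one query per post, while the batched version issues a single `WHERE post_id IN (...)`-style query and groups results in memory.

```typescript
// Hypothetical in-memory "database" used to illustrate the pattern.
type Comment = { postId: number; text: string };
const comments: Comment[] = [
  { postId: 1, text: "a" },
  { postId: 1, text: "b" },
  { postId: 2, text: "c" },
];
let queryCount = 0; // tracks how many "database queries" each approach issues

// N+1 pattern: one query per post, inside the loop.
function commentsPerPostNPlusOne(postIds: number[]): Map<number, Comment[]> {
  const result = new Map<number, Comment[]>();
  for (const id of postIds) {
    queryCount++; // each iteration hits the database
    result.set(id, comments.filter((c) => c.postId === id));
  }
  return result;
}

// Batched alternative: a single query for all posts, grouped in memory.
function commentsPerPostBatched(postIds: number[]): Map<number, Comment[]> {
  queryCount++; // one query, e.g. WHERE post_id IN (...)
  const ids = new Set(postIds);
  const result = new Map<number, Comment[]>(
    postIds.map((id): [number, Comment[]] => [id, []]),
  );
  for (const c of comments) {
    if (ids.has(c.postId)) result.get(c.postId)!.push(c);
  }
  return result;
}
```

With N posts, the loop version costs N queries and the batched version costs one; most ORMs expose the same idea as eager loading or join fetching.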

## Memory & Resources

- Memory leaks: event listeners not removed, intervals not cleared, growing caches without bounds
- Large objects held in memory unnecessarily
- Unbounded arrays or maps that grow with usage
- Missing cleanup in component unmount/destroy lifecycle
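
The "growing caches without bounds" item can be addressed with a size cap. A minimal sketch of a size-bounded LRU cache, assuming a `Map`-based store (JavaScript `Map` iterates in insertion order, which makes the least-recently-used key the first one):

```typescript
// Size-bounded LRU cache: an unbounded Map used as a cache grows with
// usage, while capping its size keeps memory flat under load.
class LruCache<K, V> {
  private store = new Map<K, V>();
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    const value = this.store.get(key);
    if (value !== undefined) {
      // Re-insert to mark the entry as most recently used.
      this.store.delete(key);
      this.store.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    this.store.delete(key);
    this.store.set(key, value);
    if (this.store.size > this.maxSize) {
      // Map iterates in insertion order, so the first key is least recent.
      const oldest = this.store.keys().next().value as K;
      this.store.delete(oldest);
    }
  }

  get size(): number {
    return this.store.size;
  }
}
```

Production code would usually reach for an established library (e.g. an `lru-cache`-style package) rather than hand-rolling this, but the bound itself is the point.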

## Rendering (Frontend)

- Unnecessary re-renders (missing React.memo, useMemo, useCallback where appropriate)
- Large component trees re-rendering for small state changes
- Missing virtualization for long lists
- Synchronous heavy computation blocking the main thread
- Large bundle sizes from unnecessary imports
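
The index math behind list virtualization can be sketched as a pure function (this is an illustration of the technique, not the API of any particular library): only the rows intersecting the viewport, plus a small overscan, are rendered. The sketch assumes fixed-height rows; variable heights need a position index.

```typescript
// Compute the half-open window [start, end) of rows worth rendering
// for a scrolled list with fixed-height rows.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 3,
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return { start, end }; // render only rows[start..end)
}
```

For a 10,000-row list in a 600px viewport with 30px rows, this renders a few dozen DOM nodes instead of 10,000, regardless of scroll position.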

## API & Network

- Missing caching for frequently accessed, rarely changing data
- Sequential API calls that could be parallelized
- Missing pagination for large data sets
- Over-fetching data (requesting more than needed)
- Missing request deduplication
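
Request deduplication from the list above can be sketched as an in-flight promise map: concurrent callers asking for the same key share one pending promise instead of issuing duplicate requests. `fetcher` here is a stand-in for any async call (fetch, RPC, database read).

```typescript
// Map of requests currently in flight, keyed by a caller-chosen cache key.
const inFlight = new Map<string, Promise<unknown>>();

function dedupe<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>; // share the pending request
  const p = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

The same section's "sequential API calls" item pairs naturally with this: independent calls that were `await`-ed one after another can instead run concurrently with `Promise.all`.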

## Algorithmic Complexity

- O(n²) or worse operations on potentially large datasets
- Repeated computation that could be memoized
- String concatenation in loops (use array join or template literals)
- Unnecessary sorting or filtering passes
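
A concrete instance of the O(n²) item: duplicate detection with a nested loop compares every pair, while a `Set`-based version does a single pass with O(1) lookups.

```typescript
// Quadratic: compares every pair of elements.
function hasDuplicateQuadratic(items: string[]): boolean {
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true; // O(n²) comparisons
    }
  }
  return false;
}

// Linear: one pass with constant-time membership checks.
function hasDuplicateLinear(items: string[]): boolean {
  const seen = new Set<string>();
  for (const item of items) {
    if (seen.has(item)) return true; // O(1) lookup per element
    seen.add(item);
  }
  return false;
}
```

On a 100,000-element array the quadratic version performs up to ~5 billion comparisons; the linear version performs 100,000 lookups.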

## Severity Levels

- 🔴 **CRITICAL**: Will cause performance degradation under normal load
- 🟠 **HIGH**: Will cause issues at scale
- 🟡 **MEDIUM**: Optimization opportunity with measurable impact
- 🔵 **LOW**: Minor optimization suggestion

Overview

This skill performs performance-focused code reviews to identify runtime bottlenecks, inefficient patterns, and optimization opportunities across backend and frontend code. It surfaces issues in database queries, memory usage, rendering, network behavior, and algorithmic complexity, and classifies findings by severity. The goal is actionable recommendations that reduce latency, memory footprint, and CPU waste while improving scalability.

How this skill works

The review inspects code for common anti-patterns such as N+1 queries, unbounded memory growth, unnecessary re-renders, and high-complexity algorithms. It highlights missing safeguards like indexing, pagination, caching, connection pooling, and request deduplication. Each finding includes a severity level and concrete remediation steps (e.g., add indexes, introduce memoization, batch queries, or virtualize long lists).

When to use it

  • Before a major release to catch regressions that impact performance
  • When load testing reveals latency or memory spikes
  • During architecture or design reviews for new features handling large data
  • When build sizes or render times grow unexpectedly
  • As part of post-incident analysis for production performance incidents

Best practices

  • Prioritize fixes by severity and measurable impact (start with critical and high items)
  • Add benchmarks or load tests to verify improvements and avoid regressions
  • Introduce limits: pagination, connection pools, timeouts, and bounded caches
  • Avoid premature optimization; focus on hotspots and reproducible bottlenecks
  • Document changes and add monitoring (metrics/tracing) to validate real-world behavior

Example use cases

  • Detecting and replacing N+1 database patterns with batched joins or eager loading
  • Identifying unbounded in-memory caches and switching to size-limited LRU caches
  • Replacing costly synchronous computations in render paths with web workers or memoization
  • Adding pagination and caching for endpoints that previously returned huge payloads
  • Refactoring O(n²) algorithms to linear or n log n implementations for large datasets

FAQ

What do the severity levels mean, and how should I prioritize them?

Critical issues break performance under normal load and should be fixed first, followed by high issues that appear at scale. Medium and low items improve efficiency but can be scheduled after higher-priority fixes.

How do I prove an optimization worked?

Use benchmarks, load tests, and production metrics/tracing before and after changes to quantify latency, throughput, and memory improvements.
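
As a starting point before reaching for full load tests, a micro-benchmark can confirm a change is at least directionally faster. A minimal sketch using the standard `performance.now()` timer (the `timeIt` helper name is illustrative):

```typescript
// Time a function over many iterations and report the elapsed wall time.
// Micro-benchmarks are noisy; treat the result as a rough signal and
// confirm real-world impact with load tests and production metrics.
function timeIt(label: string, fn: () => void, iterations = 1000): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsed = performance.now() - start;
  console.log(`${label}: ${elapsed.toFixed(2)}ms for ${iterations} runs`);
  return elapsed;
}
```

Comparing `timeIt` results for the old and new implementations on the same input gives a quick before/after number to accompany the metrics-based verification described above.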

Can this review be applied to frontend and backend code?

Yes. The checklist covers database queries, API patterns, memory/resource usage, rendering issues, and algorithmic complexity across both client and server code.