
performance-profiler skill


This skill profiles application performance, identifies bottlenecks, and guides targeted optimizations across CPU, memory, and database query performance.

npx playbooks add skill eddiebe147/claude-settings --skill performance-profiler

Copy the command above to add this skill to your agents.

---
name: Performance Profiler
slug: performance-profiler
description: Profile application performance, identify bottlenecks, and optimize for speed
category: technical
complexity: advanced
version: "1.0.0"
author: "ID8Labs"
triggers:
  - "profile performance"
  - "find bottlenecks"
  - "optimize speed"
tags:
  - performance
  - profiling
  - optimization
---

# Performance Profiler

Identify and eliminate performance bottlenecks. From CPU profiling to database query optimization, systematically improve application speed and efficiency.

## Core Workflows

### Workflow 1: Application Profiling
1. **Baseline** - Establish current performance metrics
2. **Profiling** - Run CPU, memory, and I/O profilers
3. **Hotspot Analysis** - Identify slow code paths
4. **Optimization** - Implement targeted improvements
5. **Verification** - Measure improvement
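The profiling step above can be sketched with Python's built-in `cProfile` and `pstats` modules. The `slow_sum` function here is a hypothetical hotspot, invented purely for illustration; in practice you would wrap your application's real entry point.

```python
import cProfile
import io
import pstats


def slow_sum(n):
    """Hypothetical hotspot: an O(n) loop a profiler should surface."""
    total = 0
    for i in range(n):
        total += i * i
    return total


def profile_hotspots(func, *args):
    """Run func under cProfile and return its result plus a hotspot report."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args)
    profiler.disable()

    buf = io.StringIO()
    stats = pstats.Stats(profiler, stream=buf).sort_stats("cumulative")
    stats.print_stats(5)  # show only the top 5 call paths by cumulative time
    return result, buf.getvalue()


result, report = profile_hotspots(slow_sum, 100_000)
print(report)
```

Sorting by cumulative time surfaces the call paths that dominate wall-clock cost, which is usually the right starting point for hotspot analysis.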

### Workflow 2: Database Optimization
1. **Query Analysis** - Identify slow queries
2. **Explain Plans** - Analyze query execution
3. **Index Review** - Optimize indexes
4. **Query Rewriting** - Improve query structure
5. **Connection Pooling** - Optimize connections
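The explain-plan and index-review steps can be demonstrated end to end with Python's standard-library `sqlite3`; the `orders` table and `idx_orders_customer` index are hypothetical names used only for this sketch. The same before/after plan comparison applies to any database's EXPLAIN facility.

```python
import sqlite3

# In-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"


def plan(conn, sql):
    """Return SQLite's EXPLAIN QUERY PLAN output as one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql, (42,)).fetchall()
    return " ".join(row[-1] for row in rows)  # last column holds the detail text


before = plan(conn, query)  # before indexing: a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(conn, query)   # after indexing: an index search
print(before)
print(after)
```

Comparing the plan text before and after adding the index confirms the optimizer actually uses it, which is the point of the index-review step.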

## Quick Reference

| Action | Command |
|--------|---------|
| Profile app | "Profile [application] performance" |
| Find bottlenecks | "Identify performance bottlenecks" |
| Optimize queries | "Optimize slow database queries" |

Overview

This skill profiles application performance to find and remove bottlenecks across CPU, memory, I/O, and database layers. It guides a structured workflow from establishing a baseline to verifying improvements, helping teams produce faster, more reliable software. Practical steps cover both application-level hotspots and database query optimization for end-to-end speed gains.

How this skill works

The skill runs targeted profilers and diagnostics to collect metrics for CPU, memory, and I/O usage, plus database query execution details. It analyzes hotspots and slow queries, recommends optimizations such as code changes, index adjustments, or connection pool tuning, and then reruns measurements to verify impact. Commands and workflows are organized so you can repeat the profile, optimize, and verify cycle reliably.

When to use it

  • Before major releases to ensure performance regressions are not introduced
  • When users report slow responses or high latency in production
  • During load tests to identify scaling limits and hotspots
  • When database queries or transaction times spike
  • When reducing infrastructure costs by improving efficiency

Best practices

  • Establish a clear baseline of performance metrics before changes
  • Profile in environments that mirror production for realistic results
  • Target the highest-impact hotspots first using Pareto principles
  • Combine application and database profiling for complete root-cause analysis
  • Measure after every change to confirm actual improvement
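The last two practices, baseline first and measure after every change, can be sketched with `time.perf_counter`. The before/after pair below (a quadratic membership test versus a hash-based lookup) is a hypothetical optimization chosen for illustration.

```python
import statistics
import time


def measure(func, *args, repeats=5):
    """Time several runs of func and return the median wall-clock seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)  # median resists one-off timing noise


def slow_unique(items):
    """Baseline: O(n) list scan per element, quadratic overall."""
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen


def fast_unique(items):
    """Optimized: O(1) hash lookups, preserves insertion order."""
    return list(dict.fromkeys(items))


data = list(range(2000)) * 2
baseline = measure(slow_unique, data)
optimized = measure(fast_unique, data)
print(f"baseline {baseline:.4f}s vs optimized {optimized:.4f}s")
assert slow_unique(data) == fast_unique(data)  # same behavior before claiming a win
```

Taking the median of several runs and checking behavioral equivalence before comparing numbers keeps the verification honest.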

Example use cases

  • Detecting a CPU-bound loop causing request queue buildup and fixing it to reduce latency
  • Profiling memory leaks during long-running jobs and implementing fixes to lower memory usage
  • Analyzing slow SQL queries, adding appropriate indexes, and rewriting queries to cut response times
  • Tuning connection pooling to prevent database contention under peak load
  • Validating that a caching layer delivers expected throughput improvements after deployment

FAQ

How long does a full profiling cycle take?

A baseline plus profiling and initial analysis often takes a few hours; full optimization and verification can take days depending on complexity.

Do I need production data to profile effectively?

Production-like load is ideal. Use representative datasets and traffic patterns to get accurate results while protecting sensitive data.