
This skill helps you profile application performance by identifying bottlenecks and recommending targeted optimizations across CPU, memory, and execution time.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill profiling-application-performance

Review the files below or copy the command above to add this skill to your agents.

Files (7)
SKILL.md
3.3 KB
---
name: profiling-application-performance
description: |
  This skill enables the AI assistant to profile application performance, analyzing CPU usage, memory consumption, and execution time. It is triggered when the user requests performance analysis, bottleneck identification, or optimization recommendations. Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash(cmd:*)
version: 1.0.0
author: Jeremy Longshore <[email protected]>
license: MIT
---
# Application Profiler

This skill provides automated assistance for application profiling tasks.

## Overview

This skill empowers Claude to analyze application performance, pinpoint bottlenecks, and recommend optimizations. By leveraging the application-profiler plugin, it provides insights into CPU usage, memory allocation, and execution time, enabling targeted improvements.

## How It Works

1. **Identify Application Stack**: Determines the application's technology (e.g., Node.js, Python, Java).
2. **Locate Entry Points**: Identifies main application entry points and critical execution paths.
3. **Analyze Performance Metrics**: Examines CPU usage, memory allocation, and execution time to detect bottlenecks.
4. **Generate Profile**: Compiles the analysis into a comprehensive performance profile, highlighting areas for optimization.
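The four steps above can be sketched for a Python target roughly as follows. The marker-file heuristic and the `detect_stack` / `profile_entry_point` helper names are illustrative only, not part of any real plugin API:

```python
import cProfile
import io
import pstats
from pathlib import Path

def detect_stack(project_dir: str) -> str:
    """Step 1: guess the technology stack from common marker files (heuristic)."""
    root = Path(project_dir)
    if (root / "package.json").exists():
        return "node"
    if (root / "pom.xml").exists() or (root / "build.gradle").exists():
        return "java"
    if (root / "pyproject.toml").exists() or (root / "requirements.txt").exists():
        return "python"
    return "unknown"

def profile_entry_point(func, *args, top=5):
    """Steps 2-4: run an entry point under cProfile and summarize hotspots."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args)
    profiler.disable()
    buf = io.StringIO()
    # Sort by cumulative time so the most expensive call paths surface first.
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(top)
    return result, buf.getvalue()
```

For Node.js or Java targets the same workflow applies, but the collection step would shell out to tools like `node --prof` or a JVM profiler instead of `cProfile`.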

## When to Use This Skill

This skill activates when you need to:
- Analyze application performance for bottlenecks.
- Identify CPU-intensive operations and memory leaks.
- Optimize application execution time.

## Examples

### Example 1: Identifying Memory Leaks

User request: "Analyze my Node.js application for memory leaks."

The skill will:
1. Activate the application-profiler plugin.
2. Analyze the application's memory allocation patterns.
3. Generate a profile highlighting potential memory leaks.
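As a minimal stand-in for step 2, the snapshot-diff approach can be illustrated with Python's stdlib `tracemalloc`; a real Node.js analysis would compare V8 heap snapshots instead, but the principle is the same. The `leaky_cache` and `handle_request` names are made up for the example:

```python
import tracemalloc

leaky_cache = []  # simulated leak: grows without bound

def handle_request(payload: bytes) -> None:
    leaky_cache.append(payload)  # bug: entries are never evicted

tracemalloc.start()
baseline = tracemalloc.take_snapshot()
for _ in range(1000):
    handle_request(bytes(1024))  # each call retains ~1 KB
current = tracemalloc.take_snapshot()

# Allocations that grew between snapshots point at the leak site.
growth = current.compare_to(baseline, "lineno")
top = growth[0]
print(f"top growth: {top.size_diff} bytes at {top.traceback}")
```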

### Example 2: Optimizing CPU Usage

User request: "Profile my Python script and find the most CPU-intensive functions."

The skill will:
1. Activate the application-profiler plugin.
2. Analyze the script's CPU usage.
3. Generate a profile identifying the functions consuming the most CPU time.
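A sketch of what such a profile contains for the Python case: `cProfile` attributes time per function, and sorting by `tottime` surfaces the most expensive ones. The functions below are invented for illustration:

```python
import cProfile
import io
import pstats

def cheap_step(n):
    return n + 1

def expensive_step(n):
    return sum(i * i for i in range(n))  # dominates CPU time

def pipeline():
    total = 0
    for _ in range(200):
        total = cheap_step(total)
        total += expensive_step(5000)
    return total

profiler = cProfile.Profile()
profiler.enable()
pipeline()
profiler.disable()

buf = io.StringIO()
# "tottime" ranks by time spent in the function itself, excluding callees.
pstats.Stats(profiler, stream=buf).sort_stats("tottime").print_stats(10)
report = buf.getvalue()
```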

## Best Practices

- **Code Instrumentation**: Ensure the application code is instrumented for accurate profiling.
- **Realistic Workloads**: Use realistic workloads during profiling to simulate real-world scenarios.
- **Iterative Optimization**: Apply optimizations iteratively and re-profile to measure improvements.
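The iterative practice can be sketched as timing a baseline and a candidate optimization under the same workload, keeping the change only if the measurement confirms the win. The de-duplication example here is hypothetical:

```python
import time

def measure(func, *args, repeats=3):
    """Best-of-N wall-clock timing to reduce noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        best = min(best, time.perf_counter() - start)
    return best

def baseline(items):
    out = []
    for x in items:
        if x not in out:      # O(n^2): list membership test
            out.append(x)
    return out

def optimized(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:     # O(n): set membership test
            seen.add(x)
            out.append(x)
    return out

data = list(range(2000)) * 2
```

Re-profiling after each change keeps attribution clear: if two optimizations land together and performance regresses, there is no way to tell which one caused it.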

## Integration

This skill can be used in conjunction with code editing plugins to implement the recommended optimizations directly within the application's source code. It can also integrate with monitoring tools to track performance improvements over time.

## Prerequisites

- Appropriate file access permissions
- Required dependencies installed

## Instructions

1. Invoke this skill when the trigger conditions are met
2. Provide necessary context and parameters
3. Review the generated output
4. Apply modifications as needed

## Output

The skill produces a structured performance profile: ranked CPU hotspots, memory findings, and prioritized optimization recommendations.

## Error Handling

- Invalid input: Prompts for correction
- Missing dependencies: Lists required components
- Permission errors: Suggests remediation steps

## Resources

- Project documentation
- Related skills and commands

## Overview

This skill enables automated profiling of application performance to find CPU hotspots, memory issues, and slow execution paths. It produces a structured performance profile and concrete optimization recommendations targeted to the app stack. Use it to turn raw runtime metrics into actionable tuning steps.

## How this skill works

The skill detects the application stack (for example Python, Node.js, or Java), locates primary entry points and hot code paths, and collects metrics for CPU usage, memory allocation, and execution time. It compiles findings into a profile that highlights bottlenecks, potential memory leaks, and functions or modules with the highest cost. The output includes prioritized recommendations and suggested instrumentation or code changes.

## When to use it

- When you need to find and fix CPU-intensive functions or threads.
- When investigating suspected memory leaks or excessive allocations.
- When application response time or throughput is below expectations.
- Before and after optimization to measure the impact of changes.
- When preparing a performance report for stakeholders or SREs.

## Best practices

- Ensure code is instrumented and profiling hooks are enabled to collect accurate traces.
- Profile under realistic workloads that mirror production traffic and data shapes.
- Make iterative changes: apply one optimization at a time and re-profile to confirm improvement.
- Capture both wall-clock and CPU time, and correlate with memory snapshots for comprehensive analysis.
- Keep profiling overhead minimal in production; use sampled or targeted profiles if necessary.
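The wall-clock versus CPU-time practice above can be sketched in a few lines: a large gap between the two numbers usually means the code is waiting (on I/O, locks, or sleeps) rather than computing. The `timed` helper is an illustration, not a library function:

```python
import time

def timed(func, *args):
    """Return (wall-clock seconds, CPU seconds) for one call."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    func(*args)
    return time.perf_counter() - wall_start, time.process_time() - cpu_start

def io_bound():
    time.sleep(0.2)  # stands in for a network or disk wait

def cpu_bound():
    sum(i * i for i in range(200_000))

wall_io, cpu_io = timed(io_bound)    # large wall time, near-zero CPU time
wall_cpu, cpu_cpu = timed(cpu_bound)  # wall and CPU time roughly track
```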

## Example use cases

- Analyze a Node.js API to locate functions causing high memory growth and suspected leaks.
- Profile a Python data-processing script to identify the most CPU-expensive functions for vectorization.
- Examine a Java web service to find slow request handlers and long GC pauses affecting latency.
- Compare before/after profiles to validate the impact of caching or algorithmic changes.
- Generate a prioritized optimization plan to hand off to developers or SREs.

## FAQ

### What runtime access does the skill need?

It needs permission to run profilers or collect runtime traces and access to relevant code or deployment artifacts; specifics vary by platform.

### Can this run in production?

Yes, but use low-overhead sampling or targeted profiling to minimize impact. Full instrumentation is safer in staging with production-like workloads.
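The low-overhead sampling idea can be illustrated inline: a background thread periodically samples which function the main thread is executing via `sys._current_frames()`, so the profiled code pays almost nothing per call. Production tools such as py-spy do this out of process; this in-process version is only a sketch:

```python
import collections
import sys
import threading
import time

def sample_main_thread(workload, interval=0.005):
    """Run `workload` while sampling the main thread's current function."""
    counts = collections.Counter()
    main_id = threading.main_thread().ident
    done = threading.Event()

    def sampler():
        while not done.is_set():
            # Map of thread id -> topmost frame for every running thread.
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                counts[frame.f_code.co_name] += 1
            time.sleep(interval)

    t = threading.Thread(target=sampler, daemon=True)
    t.start()
    workload()
    done.set()
    t.join()
    return counts

def hot_loop():
    total = 0
    for i in range(3_000_000):
        total += i * i
    return total

samples = sample_main_thread(hot_loop)  # functions weighted by samples observed
```

Because sampling only observes the stack at intervals, its cost is fixed by the interval rather than by how hot the code is, which is what makes it viable in production.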