
python-performance skill

/plugins/parseltongue/skills/python-performance

This skill helps you profile and optimize Python performance by identifying bottlenecks, reducing latency, and improving memory usage.

npx playbooks add skill athola/claude-night-market --skill python-performance


Files (6)
SKILL.md
3.2 KB
---
name: python-performance
description: 'Consult this skill for Python performance profiling and optimization.
  Use when debugging slow code, identifying bottlenecks, optimizing memory, benchmarking
  performance, or profiling in production. Do not use for async concurrency - use
  python-async instead. Do not use for CPU/GPU system monitoring - use conservation:cpu-gpu-performance.'
category: performance
tags:
- python
- performance
- profiling
- optimization
- cProfile
- memory
tools:
- profiler-runner
- memory-analyzer
- benchmark-suite
usage_patterns:
- performance-analysis
- bottleneck-identification
- memory-optimization
- algorithm-optimization
complexity: intermediate
estimated_tokens: 1200
progressive_loading: true
modules:
- profiling-tools
- optimization-patterns
- memory-management
- benchmarking-tools
- best-practices
---

# Python Performance Optimization

Profiling and optimization patterns for Python code.

## Table of Contents

1. [Quick Start](#quick-start)
2. [When To Use](#when-to-use)
3. [When NOT To Use](#when-not-to-use)
4. [Modules](#modules)
5. [Exit Criteria](#exit-criteria)
6. [Troubleshooting](#troubleshooting)

## Quick Start

```python
# Basic timing
import timeit

# Total wall-clock time for 100 runs of the statement
elapsed = timeit.timeit("sum(range(1000000))", number=100)
print(f"Average: {elapsed / 100:.6f}s")
```
**Verification:** Run `python -m timeit "sum(range(1000000))"` for an equivalent check from the shell.

## When To Use

- Identifying performance bottlenecks
- Reducing application latency
- Optimizing CPU-intensive operations
- Reducing memory consumption
- Profiling production applications
- Improving database query performance

## When NOT To Use

- Async concurrency - use python-async instead
- CPU/GPU system monitoring - use conservation:cpu-gpu-performance

## Modules

This skill is organized into focused modules for progressive loading:

### [profiling-tools](modules/profiling-tools.md)
CPU profiling with cProfile, line profiling, memory profiling, and production profiling with py-spy. Essential for identifying where your code spends time and memory.
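As a minimal sketch of the cProfile workflow this module covers (the function name `hot_loop` is illustrative, not part of the skill):

```python
import cProfile
import io
import pstats

def hot_loop():
    # Deliberately repetitive work so the profiler has something to find
    total = 0
    for i in range(500):
        total += sum(range(i))
    return total

profiler = cProfile.Profile()
profiler.enable()
hot_loop()
profiler.disable()

# Sort by cumulative time and show the top 5 entries
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Sorting by `cumulative` surfaces the functions whose call trees dominate runtime, which is usually the right starting point before reaching for line-level profilers.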

### [optimization-patterns](modules/optimization-patterns.md)
Ten proven optimization patterns including list comprehensions, generators, caching, string concatenation, data structures, NumPy, multiprocessing, and database operations.
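One of the listed patterns, caching, can be sketched with the standard library's `functools.lru_cache`; the Fibonacci function here is a stand-in for any expensive pure function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Memoized recursion: each n is computed once instead of
    # exponentially many times in the naive version
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # returns instantly; uncached recursion would never finish
```

`fib.cache_info()` reports hits and misses, which is a quick way to confirm the cache is actually being used.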

### [memory-management](modules/memory-management.md)
Memory optimization techniques including leak tracking with tracemalloc and weak references for caches. Depends on profiling-tools.
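A minimal sketch of the tracemalloc leak-tracking workflow described above; the list comprehension is just a placeholder allocation:

```python
import tracemalloc

tracemalloc.start()

# An allocation large enough to show up in the snapshot
data = [str(i) * 10 for i in range(10_000)]

# Attribute current allocations to source lines
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)

current, peak = tracemalloc.get_traced_memory()
print(f"current={current:,} B, peak={peak:,} B")
tracemalloc.stop()
```

Comparing two snapshots with `snapshot2.compare_to(snapshot1, "lineno")` is the usual way to isolate a leak that grows between iterations.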

### [benchmarking-tools](modules/benchmarking-tools.md)
Benchmarking tools including custom decorators and pytest-benchmark for verifying performance improvements.
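A custom timing decorator of the kind this module describes might look like the following sketch (names are illustrative, not the skill's actual API):

```python
import functools
import time

def benchmark(repeats=5):
    """Report the best wall-clock time over several runs."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            best = float("inf")
            result = None
            for _ in range(repeats):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                # Keep the minimum: least affected by scheduler noise
                best = min(best, time.perf_counter() - start)
            print(f"{func.__name__}: best of {repeats} = {best:.6f}s")
            return result
        return wrapper
    return decorator

@benchmark(repeats=3)
def build_list(n):
    return [i * i for i in range(n)]

build_list(100_000)
```

Taking the minimum rather than the mean is the same convention `timeit` recommends, since background load only ever inflates timings.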

### [best-practices](modules/best-practices.md)
Best practices, common pitfalls, and exit criteria for performance optimization work. Synthesizes guidance from profiling-tools and optimization-patterns.

## Exit Criteria

- Profiled code to identify bottlenecks
- Applied appropriate optimization patterns
- Verified improvements with benchmarks
- Memory usage acceptable
- No performance regressions

## Troubleshooting

### Common Issues

**Profiler not found**
Ensure the profiling tools are installed (`pip install line_profiler memory-profiler py-spy`) and on PATH

**Permission errors**
Attaching py-spy to a running process requires ptrace permissions; rerun with elevated privileges if attachment fails

**High profiling overhead**
Deterministic profilers such as cProfile slow execution noticeably; switch to a sampling profiler for production workloads

Overview

This skill helps you profile and optimize Python code to reduce latency, lower CPU/memory use, and verify improvements with benchmarks. It bundles practical tools and patterns for CPU and memory profiling, benchmarking, and targeted optimizations. Use it when you need reproducible performance gains in Python applications.

How this skill works

The skill inspects runtime hotspots using cProfile, line_profiler, and sampling profilers like py-spy to locate costly functions and lines. It guides memory analysis with tracemalloc and memory-profiler, applies optimization patterns (generators, caching, vectorized operations), and verifies changes with custom timing utilities or pytest-benchmark. It also offers production-safe profiling approaches and troubleshooting tips.
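The measure-then-verify loop described above can be sketched with `timeit`; `manual_sum` is an illustrative stand-in for a hotspot you would compare against an optimized version:

```python
import timeit

def manual_sum(n):
    # Pure-Python loop: interpreter overhead on every iteration
    total = 0
    for i in range(n):
        total += i
    return total

# Compare the hotspot against the C-implemented builtin
loop_t = timeit.timeit(lambda: manual_sum(100_000), number=20)
builtin_t = timeit.timeit(lambda: sum(range(100_000)), number=20)
print(f"manual loop: {loop_t:.4f}s  builtin sum: {builtin_t:.4f}s")
```

The same pair of timings, rerun after each change, is what turns an optimization from a guess into a verified improvement.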

When to use it

  • Debugging slow code and locating hotspots
  • Reducing latency in request handlers or scripts
  • Optimizing CPU-intensive algorithms and data processing
  • Reducing memory consumption and tracking leaks
  • Benchmarking changes and validating performance improvements

Best practices

  • Profile before optimizing — measure to find real bottlenecks
  • Start with sampling profilers in production and line profilers locally
  • Use benchmarks to prove improvements and prevent regressions
  • Prefer algorithmic changes and better data structures over micro-optimizations
  • Isolate changes and run tests under realistic data loads

Example use cases

  • Identify a slow API endpoint and optimize the handler and DB queries
  • Reduce peak memory in a pipeline by replacing lists with generators and streaming I/O
  • Profile a data transformation to replace Python loops with NumPy/vectorized code
  • Benchmark two caching strategies to choose the best trade-off
  • Use py-spy to sample a production process without stopping it
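The list-to-generator swap in the pipeline use case above can be sketched as follows; the sizes reported are for the container objects themselves, which is exactly why the generator stays constant-size while the list grows with `n`:

```python
import sys

n = 100_000
squares_list = [i * i for i in range(n)]   # materializes every element
squares_gen = (i * i for i in range(n))    # lazy; constant-size object

print(f"list: {sys.getsizeof(squares_list):,} bytes")
print(f"generator: {sys.getsizeof(squares_gen):,} bytes")
print(sum(squares_gen))  # streams through without building a full list
```

The trade-off: a generator can only be consumed once, so it fits streaming pipelines, not code that needs random access or multiple passes.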

FAQ

Can I use this skill for async concurrency issues?

No. For async-specific concurrency problems use a dedicated async performance skill; this skill focuses on CPU and memory profiling for synchronous code.

Is it safe to profile production services?

Use sampling profilers like py-spy or lightweight tracing in production to avoid slowing services. Avoid heavy line profiling on live traffic; capture representative traces instead.