---
name: python-performance
description: 'Consult this skill for Python performance profiling and optimization.
Use when debugging slow code, identifying bottlenecks, optimizing memory use,
benchmarking, or profiling production applications. Do not use for async
concurrency (use python-async instead) or CPU/GPU system monitoring (use
conservation:cpu-gpu-performance).'
category: performance
tags:
- python
- performance
- profiling
- optimization
- cProfile
- memory
tools:
- profiler-runner
- memory-analyzer
- benchmark-suite
usage_patterns:
- performance-analysis
- bottleneck-identification
- memory-optimization
- algorithm-optimization
complexity: intermediate
estimated_tokens: 1200
progressive_loading: true
modules:
- profiling-tools
- optimization-patterns
- memory-management
- benchmarking-tools
- best-practices
---
# Python Performance Optimization
Profiling and optimization patterns for Python code.
## Table of Contents
1. [Quick Start](#quick-start)
2. [When To Use](#when-to-use)
3. [When NOT To Use](#when-not-to-use)
4. [Modules](#modules)
5. [Exit Criteria](#exit-criteria)
6. [Troubleshooting](#troubleshooting)
## Quick Start
```python
# Basic timing: timeit returns the total time for all runs,
# so divide by `number` to get the per-run average.
import timeit

total = timeit.timeit("sum(range(1000000))", number=100)
print(f"Average: {total / 100:.6f}s")
```
**Verification:** `timeit` and `cProfile` ship with the standard library; verify optional tools (e.g. `py-spy --help`) before profiling.
## When To Use
- Identifying performance bottlenecks
- Reducing application latency
- Optimizing CPU-intensive operations
- Reducing memory consumption
- Profiling production applications
- Improving database query performance
## When NOT To Use
- Async concurrency - use python-async instead
- CPU/GPU system monitoring - use conservation:cpu-gpu-performance
## Modules
This skill is organized into focused modules for progressive loading:
### [profiling-tools](modules/profiling-tools.md)
CPU profiling with cProfile, line profiling, memory profiling, and production profiling with py-spy. Essential for identifying where your code spends time and memory.
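A minimal cProfile session looks like the following sketch; `slow_function` is a stand-in for whatever workload you want to measure:

```python
import cProfile
import io
import pstats

def slow_function():
    # Deliberately quadratic work so it dominates the profile
    total = 0
    for i in range(1000):
        total += sum(range(i))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_function()
profiler.disable()

# Print the ten most expensive entries by cumulative time
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Sort by `"tottime"` instead of `"cumulative"` when you want time spent inside a function excluding its callees.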
### [optimization-patterns](modules/optimization-patterns.md)
Ten proven optimization patterns including list comprehensions, generators, caching, string concatenation, data structures, NumPy, multiprocessing, and database operations.
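Two of these patterns, caching and generators, can be sketched in a few lines; `fib` is an illustrative example, not part of the module itself:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Memoization turns the naive exponential recursion into linear time
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# A generator expression avoids materializing an intermediate list
total = sum(x * x for x in range(1_000_000))

print(fib(50))
print(total)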
### [memory-management](modules/memory-management.md)
Memory optimization techniques including leak tracking with tracemalloc and weak references for caches. Depends on profiling-tools.
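The core tracemalloc workflow from this module can be sketched as follows (the list allocation is just a measurable placeholder):

```python
import tracemalloc

tracemalloc.start()

# Allocate something measurable
data = [bytes(1000) for _ in range(1000)]

current, peak = tracemalloc.get_traced_memory()
print(f"Current: {current / 1024:.1f} KiB, peak: {peak / 1024:.1f} KiB")

# A snapshot shows which source lines allocated the most
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)

tracemalloc.stop()
```

Comparing two snapshots with `snapshot.compare_to(earlier, "lineno")` is the usual way to track down a leak over time.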
### [benchmarking-tools](modules/benchmarking-tools.md)
Benchmarking tools including custom decorators and pytest-benchmark for verifying performance improvements.
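A custom timing decorator of the kind this module describes might look like this sketch (`timed` and `workload` are illustrative names):

```python
import time
from functools import wraps

def timed(fn):
    """Decorator that reports wall-clock time per call."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{fn.__name__}: {elapsed:.6f}s")
        return result
    return wrapper

@timed
def workload():
    return sum(range(100_000))

workload()
```

`time.perf_counter()` is preferred over `time.time()` here because it is monotonic and has the highest available resolution for interval measurement.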
### [best-practices](modules/best-practices.md)
Best practices, common pitfalls, and exit criteria for performance optimization work. Synthesizes guidance from profiling-tools and optimization-patterns.
## Exit Criteria
- Profiled code to identify bottlenecks
- Applied appropriate optimization patterns
- Verified improvements with benchmarks
- Memory usage acceptable
- No performance regressions
## Troubleshooting
### Common Issues
**Profiler not found**
Install the optional tools first (e.g. `pip install py-spy line_profiler memory_profiler`) and ensure they are on PATH.
**Permission errors**
Attaching a sampling profiler such as py-spy to a running process may require elevated privileges (root, or `CAP_SYS_PTRACE` on Linux).
**Unexpected timings**
Run benchmarks several times and warm up caches first; a single run is dominated by interpreter startup and system noise.
## Overview
This skill helps you profile and optimize Python code to reduce latency, lower CPU/memory use, and verify improvements with benchmarks. It bundles practical tools and patterns for CPU and memory profiling, benchmarking, and targeted optimizations. Use it when you need reproducible performance gains in Python applications.

The skill inspects runtime hotspots using cProfile, line_profiler, and sampling profilers like py-spy to locate costly functions and lines. It guides memory analysis with tracemalloc and memory-profiler, applies optimization patterns (generators, caching, vectorized operations), and verifies changes with custom timing utilities or pytest-benchmark. It also offers production-safe profiling approaches and troubleshooting tips.

## FAQ
**Can I use this skill for async concurrency issues?**
No. For async-specific concurrency problems use a dedicated async performance skill; this skill focuses on CPU and memory profiling for synchronous code.

**Is it safe to profile production services?**
Use sampling profilers like py-spy or lightweight tracing in production to avoid slowing services. Avoid heavy line profiling on live traffic; capture representative traces instead.