This skill helps you identify slow queries and N+1 problems by capturing traces, analyzing bottlenecks, and guiding performance fixes.
```
npx playbooks add skill dropseed/plain --skill plain-optimize
```
Review the files below or copy the command above to add this skill to your agents.
---
name: plain-optimize
description: Captures and analyzes performance traces to identify slow queries and N+1 problems. Use when analyzing performance or optimizing database queries.
---
# Performance Optimization Workflow
## 1. Capture Traces
Make a request with tracing enabled:
```
uv run plain request /path --user 1 --header "Observer: persist"
```
## 2. Find Traces
```
uv run plain observer traces --request-id <request-id>
```
## 3. Analyze Trace
```
uv run plain observer trace <trace-id> --json
```
## 4. Identify Bottlenecks
Look for:
- N+1 queries (many similar queries; see the sketch after this list)
- Slow database queries
- Missing indexes
- Unnecessary work in hot paths
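The N+1 pattern usually looks like the sketch below: one query for a list of rows, then one extra query per row when a related object is loaded lazily inside a loop. This is a hypothetical example written against the Django ORM, which Plain's model layer derives from; the `Order` and `Customer` models are illustrative only, not part of this skill.
```python
# Hypothetical models and view showing the N+1 shape. A Django-style ORM
# is assumed; Order/Customer are illustrative names only.
from django.db import models


class Customer(models.Model):
    name = models.CharField(max_length=100)


class Order(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE)


def order_summaries():
    summaries = []
    # 1 query for the orders...
    for order in Order.objects.all():
        # ...plus 1 query per order to load order.customer lazily.
        # In a trace this shows up as many near-identical SELECTs that
        # differ only by the customer id.
        summaries.append(f"{order.pk}: {order.customer.name}")
    return summaries
```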
## 5. Apply Fixes
- Add `select_related()` / `prefetch_related()` for N+1 (see the sketch after this list)
- Add database indexes for slow queries
- Cache expensive computations
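A minimal sketch of the first two fixes, continuing the hypothetical models above (again Django-style ORM, so adapt names and imports to your project): `select_related()` collapses the per-row lookups into the original query, and a `Meta.indexes` entry covers a column that traces show in slow filters.
```python
# Fixes for the sketch above. Same hypothetical models; the created_at
# field and its index are illustrative assumptions.
from django.db import models


class Customer(models.Model):
    name = models.CharField(max_length=100)


class Order(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        # Index the column that the slow queries filter on.
        indexes = [models.Index(fields=["created_at"])]


def order_summaries_fixed():
    summaries = []
    # select_related("customer") joins the customer row into the same
    # query, so the loop issues 1 query total instead of N+1.
    # Use prefetch_related() for many-to-many or reverse relations.
    for order in Order.objects.select_related("customer"):
        summaries.append(f"{order.pk}: {order.customer.name}")
    return summaries
```
What to cache, and how to invalidate it, depends on what the trace shows repeating in a hot path, so no one-size sketch fits the caching bullet.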
## 6. Verify Improvement
Re-run the trace and compare.
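One low-ceremony way to compare is to save the `--json` output before and after the fix and count query-like spans in each. The helper below is a rough sketch that deliberately does not assume the exact span schema emitted by `plain observer trace --json`; it just walks the JSON and counts strings that look like SELECT statements, so adjust it once you know the real field names.
```python
# compare_traces.py -- rough before/after comparison of two trace JSON
# dumps. No trace schema is assumed; any string value starting with
# SELECT is counted as a query.
import json
import sys


def count_selects(node) -> int:
    if isinstance(node, str):
        return 1 if node.lstrip().upper().startswith("SELECT") else 0
    if isinstance(node, dict):
        return sum(count_selects(value) for value in node.values())
    if isinstance(node, list):
        return sum(count_selects(item) for item in node)
    return 0


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            print(f"{path}: {count_selects(json.load(f))} query-like strings")
```
Capture each dump with `uv run plain observer trace <trace-id> --json > before.json` (and again after the fix), then run `python compare_traces.py before.json after.json`; a large drop in the count is the N+1 fix showing up.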
This skill captures and analyzes performance traces to surface slow queries and N+1 problems in Python web applications. It guides you from trace collection through analysis to verifying improvements, focusing on database hotspots and inefficient request paths. Use it to prioritize fixes that yield measurable latency and resource gains.
The skill instruments requests to record detailed traces, including database calls and timing information. You locate traces by request ID, inspect a trace in JSON to see spans and query patterns, and identify bottlenecks such as repeated similar queries or slow single queries. After fixes are applied, you re-run traces to confirm reduced latency and fewer redundant queries.
How do I start capturing a trace?
Make the request with the `Observer: persist` header (as shown in step 1) so the trace is recorded for later inspection.
What indicates an N+1 problem in a trace?
Many repeated similar query spans for different object IDs or rows, often clustered in a single request span.
How do I verify that my fix worked?
Re-run the same traced request, compare the trace JSON for reduced query counts and faster span durations, and measure end-to-end latency.