optimizing-performance skill

/skills/optimizing-performance

This skill analyzes and optimizes application performance across frontend, backend, and database layers, guiding profiling and targeted improvements with a measure-first, data-driven workflow.

npx playbooks add skill cloudai-x/claude-workflow-v2 --skill optimizing-performance

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
5.2 KB
---
name: optimizing-performance
description: Analyzes and optimizes application performance across frontend, backend, and database layers. Use when diagnosing slowness, improving load times, optimizing queries, reducing bundle size, or when asked about performance issues.
---

# Optimizing Performance

## Performance Optimization Workflow

Copy this checklist and track progress:

```
Performance Optimization Progress:
- [ ] Step 1: Measure baseline performance
- [ ] Step 2: Identify bottlenecks
- [ ] Step 3: Apply targeted optimizations
- [ ] Step 4: Measure again and compare
- [ ] Step 5: Repeat if targets not met
```

**Critical Rule**: Never optimize without data. Always profile before and after changes.

## Step 1: Measure Baseline

### Profiling Commands
```bash
# Node.js profiling
node --prof app.js
node --prof-process isolate*.log > profile.txt

# Python profiling
python -m cProfile -o profile.stats app.py
python -m pstats profile.stats

# Web performance
lighthouse https://example.com --output=json
```
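
For quick, scriptable baselines alongside those profilers, a small timing harness can capture latency samples before any change is made. This is a minimal sketch; `fetchDashboard` is a hypothetical operation under test:

```javascript
// Minimal baseline harness: record latency samples and report percentiles.
// `fetchDashboard` is a placeholder for the operation being measured.
const { performance } = require('perf_hooks');

async function measureBaseline(iterations = 50) {
  const samples = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fetchDashboard();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const pct = (p) => samples[Math.floor(samples.length * p)].toFixed(1);
  console.log(`p50=${pct(0.5)}ms  p95=${pct(0.95)}ms  p99=${pct(0.99)}ms`);
}
```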

## Step 2: Identify Bottlenecks

### Common Bottleneck Categories
| Category | Symptoms | Tools |
|----------|----------|-------|
| CPU | High CPU usage, slow computation | Profiler, flame graphs |
| Memory | High RAM, GC pauses, OOM | Heap snapshots, memory profiler |
| I/O | Slow disk/network, waiting | strace, network inspector |
| Database | Slow queries, lock contention | Query analyzer, EXPLAIN |
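
For the memory row, one way to gather evidence is Node's built-in heap snapshot API (`v8.writeHeapSnapshot`, available since Node 11.13). A minimal sketch that writes a snapshot on demand for inspection in Chrome DevTools:

```javascript
// Write a heap snapshot when the process receives SIGUSR2, then open the
// resulting .heapsnapshot file in Chrome DevTools > Memory to hunt for leaks.
const v8 = require('v8');

process.on('SIGUSR2', () => {
  const file = v8.writeHeapSnapshot(); // returns the generated filename
  console.log(`Heap snapshot written to ${file}`);
});
```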

## Step 3: Apply Optimizations

### Frontend Optimizations

**Bundle Size:**
```javascript
// ❌ Import entire library
import _ from 'lodash';

// ✅ Import only needed functions
import debounce from 'lodash/debounce';

// ✅ Use dynamic imports for code splitting
const HeavyComponent = lazy(() => import('./HeavyComponent'));
```

**Rendering:**
```javascript
// ❌ Render on every parent update
function Child({ data }) {
  return <ExpensiveComponent data={data} />;
}

// ✅ Memoize when props don't change
const Child = memo(function Child({ data }) {
  return <ExpensiveComponent data={data} />;
});

// ✅ Use useMemo for expensive computations
const processed = useMemo(() => expensiveCalc(data), [data]);
```

**Images:**
```html
<!-- ❌ Unoptimized -->
<img src="large-image.jpg" />

<!-- ✅ Optimized -->
<img
  src="image.webp"
  srcset="image-300.webp 300w, image-600.webp 600w"
  sizes="(max-width: 600px) 300px, 600px"
  loading="lazy"
  decoding="async"
/>
```

### Backend Optimizations

**Database Queries:**
```sql
-- ❌ N+1 Query Problem
SELECT * FROM users;
-- Then for each user:
SELECT * FROM orders WHERE user_id = ?;

-- ✅ Single query with JOIN
SELECT u.*, o.*
FROM users u
LEFT JOIN orders o ON u.id = o.user_id;

-- ✅ Also bound result-set size with pagination
SELECT * FROM users LIMIT 100 OFFSET 0;
```
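
When a single JOIN is impractical (for example, when child rows are fetched by separate application code), the N+1 pattern can also be fixed in the application layer by batching lookups into one `IN`/`ANY` query. A minimal sketch, assuming a hypothetical parameterized `db.query` helper and PostgreSQL-style placeholders:

```javascript
// Fetch users, then all of their orders in a single batched query.
async function getUsersWithOrders() {
  const users = await db.query('SELECT * FROM users LIMIT 100');
  const ids = users.map((u) => u.id);

  // One query for every user's orders instead of one query per user
  const orders = await db.query(
    'SELECT * FROM orders WHERE user_id = ANY($1)',
    [ids]
  );

  // Group orders by user_id and attach them to their users
  const byUser = new Map();
  for (const o of orders) {
    if (!byUser.has(o.user_id)) byUser.set(o.user_id, []);
    byUser.get(o.user_id).push(o);
  }
  return users.map((u) => ({ ...u, orders: byUser.get(u.id) ?? [] }));
}
```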

**Caching Strategy:**
```javascript
// Multi-layer caching
const getUser = async (id) => {
  // L1: In-memory cache (fastest)
  let user = memoryCache.get(`user:${id}`);
  if (user) return user;

  // L2: Redis cache (fast); parse the JSON string before caching the object
  const cached = await redis.get(`user:${id}`);
  if (cached) {
    user = JSON.parse(cached);
    memoryCache.set(`user:${id}`, user, 60);
    return user;
  }

  // L3: Database (slow)
  user = await db.users.findById(id);
  await redis.setex(`user:${id}`, 3600, JSON.stringify(user));
  memoryCache.set(`user:${id}`, user, 60);

  return user;
};
```

**Async Processing:**
```javascript
// ❌ Blocking operation
app.post('/upload', async (req, res) => {
  await processVideo(req.file);  // Takes 5 minutes
  res.send('Done');
});

// ✅ Queue for background processing
app.post('/upload', async (req, res) => {
  const jobId = await queue.add('processVideo', { file: req.file });
  res.send({ jobId, status: 'processing' });
});
```
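
The other half of the queue pattern is a worker that consumes jobs outside the request path. A minimal sketch, assuming the same hypothetical `queue` abstraction and `processVideo` function used above:

```javascript
// Worker side: pull jobs off the queue and do the slow work in the background.
// Clients can poll the jobId returned by POST /upload to check completion.
queue.process('processVideo', async (job) => {
  await processVideo(job.data.file);   // runs outside the HTTP request path
  return { status: 'complete' };       // result can be stored for later lookup
});
```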

### Algorithm Optimizations

```javascript
// ❌ O(n²) - nested loops
function findDuplicates(arr) {
  const duplicates = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) duplicates.push(arr[i]);
    }
  }
  return duplicates;
}

// ✅ O(n) - hash map
function findDuplicates(arr) {
  const seen = new Set();
  const duplicates = new Set();
  for (const item of arr) {
    if (seen.has(item)) duplicates.add(item);
    seen.add(item);
  }
  return [...duplicates];
}
```

## Step 4: Measure Again

After applying optimizations, re-run profiling and compare:

```
Comparison Checklist:
- [ ] Run same profiling tools as baseline
- [ ] Compare metrics before vs after
- [ ] Verify no regressions in other areas
- [ ] Document improvement percentages
```
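
To document improvement percentages consistently, a tiny helper that compares baseline and post-optimization values is enough (a sketch with illustrative numbers):

```javascript
// Percentage improvement for "lower is better" metrics (latency, LCP, bundle size).
function improvement(before, after) {
  return (((before - after) / before) * 100).toFixed(1) + '%';
}

console.log('LCP:        ', improvement(4200, 2300)); // "45.2%"
console.log('P95 latency:', improvement(820, 410));   // "50.0%"
```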

## Performance Targets

### Web Vitals
| Metric | Good | Needs Improvement | Poor |
|--------|------|------------|------|
| LCP | < 2.5s | 2.5-4s | > 4s |
| FID | < 100ms | 100-300ms | > 300ms |
| CLS | < 0.1 | 0.1-0.25 | > 0.25 |
| TTFB | < 800ms | 800ms-1.8s | > 1.8s |
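
LCP can be sampled in the browser with the native PerformanceObserver API (the `web-vitals` package is a common higher-level alternative). A minimal sketch for checking LCP against the table above:

```javascript
// Log the latest Largest Contentful Paint candidate; the final value is the
// last entry observed before the user interacts with or leaves the page.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1];
  console.log('LCP:', (lcp.startTime / 1000).toFixed(2), 's');
}).observe({ type: 'largest-contentful-paint', buffered: true });
```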

### API Performance
| Metric | Target |
|--------|--------|
| P50 Latency | < 100ms |
| P95 Latency | < 500ms |
| P99 Latency | < 1s |
| Error Rate | < 0.1% |

## Validation

After optimization, validate results:

```
Performance Validation:
- [ ] Metrics improved from baseline
- [ ] No functionality regressions
- [ ] No new errors introduced
- [ ] Changes are sustainable (not one-time fixes)
- [ ] Performance gains documented
```

If targets are not met, return to Step 2 and identify the remaining bottlenecks.

Overview

This skill analyzes and optimizes application performance across frontend, backend, and database layers. It guides you through a data-driven workflow: measure baseline, identify bottlenecks, apply targeted fixes, and verify improvements. Use it to diagnose slowness, reduce load times, and improve system efficiency.

How this skill works

The skill inspects runtime profiles, web vitals, database query plans, and resource usage to locate CPU, memory, I/O, and database bottlenecks. It recommends focused changes—bundle splitting, memoization, optimized queries, caching, queues, and algorithmic improvements—then prescribes how to re-measure and validate gains. Metrics and checks ensure optimizations are safe and repeatable.

When to use it

  • Investigating slow page loads or poor Core Web Vitals
  • Diagnosing high API latency, tail latency, or error spikes
  • Finding and fixing inefficient database queries or locks
  • Reducing frontend bundle size and render cost
  • Optimizing background jobs or long-running tasks

Best practices

  • Never optimize without data: profile before and after every change
  • Target the highest-impact bottleneck first (CPU, I/O, DB, or memory)
  • Apply multi-layer caching: in-memory, Redis, then DB
  • Prefer non-blocking work: push long tasks to background queues
  • Measure using the same tools and scenarios to ensure apples-to-apples comparison

Example use cases

  • Use profiling to find and fix an N+1 query by converting many queries into a single JOIN
  • Reduce first contentful paint by code-splitting and lazy-loading heavy components
  • Cut API P95 latency by adding Redis caching and optimizing slow SQL with EXPLAIN
  • Prevent server stalls by moving video processing to a background queue
  • Replace an O(n²) algorithm with an O(n) approach to eliminate CPU hotspots

FAQ

What metrics should I collect before optimizing?

Collect CPU and memory profiler samples, Lighthouse or Core Web Vitals data for the frontend, and query plans plus latency percentiles (P50/P95/P99) for APIs and the database.

How do I know an optimization is safe?

Re-run the same profiling tools, compare before/after metrics, run test suites, and monitor for regressions or new errors in staging before production rollout.