---
name: performance-profiling-at-scale
description: Conduct rigorous, statistically grounded performance profiling for large React applications.
---
# Performance Profiling at Scale (React 18)
## Summary
Conduct rigorous, statistically grounded performance profiling for large React applications.
## Key Capabilities
- Isolate render hotspots via flame graphs and commit profiling.
- Use synthetic benchmarks to identify tail-latency regressions.
- Quantify memoization impact with variance-aware metrics.
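The commit-profiling capability above can be sketched against React 18's `<Profiler>` `onRender` callback. The aggregator below is a minimal, framework-free illustration, assuming the documented `onRender` parameter order (`id`, `phase`, `actualDuration`, …); the `CommitStats` name and the sample durations are hypothetical, not part of this skill:

```typescript
// Aggregates per-commit render durations (as reported by React 18's
// <Profiler> onRender callback) into tail-latency percentiles.
// CommitStats and the synthetic sample data below are illustrative.
type Phase = "mount" | "update";

class CommitStats {
  private durations: number[] = [];

  // Shaped like the first three parameters of React's onRender callback.
  onRender = (id: string, phase: Phase, actualDuration: number): void => {
    this.durations.push(actualDuration);
  };

  // Nearest-rank percentile over all recorded commit durations (ms).
  percentile(p: number): number {
    const sorted = [...this.durations].sort((a, b) => a - b);
    const idx = Math.ceil((p / 100) * sorted.length) - 1;
    return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
  }
}

// Synthetic commits standing in for real Profiler callbacks.
const stats = new CommitStats();
[4, 5, 5, 6, 7, 9, 12, 14, 30, 45].forEach((ms, i) =>
  stats.onRender("App", i === 0 ? "mount" : "update", ms),
);
console.log(stats.percentile(50)); // 7
console.log(stats.percentile(95)); // 45
```

In a real app the `onRender` method would be passed directly as the `<Profiler onRender={...}>` prop; the point is that tail percentiles, not averages, are what the skill's acceptance criteria track.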
## PhD-Level Challenges
- Design controlled experiments with confounding factors minimized.
- Use A/B benchmarking with statistical significance thresholds.
- Build performance budgets tied to user-centric KPIs.
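The A/B benchmarking challenge above can be sketched as a permutation test on the difference in p95 between a control build and a candidate optimization. This is one reasonable significance procedure, not the skill's prescribed one; the sample arrays and the 0.05 threshold are illustrative:

```typescript
// Permutation test: is the observed p95 gap between control (A) and
// optimized (B) render times larger than chance relabeling would produce?
function p95(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  return s[Math.min(s.length - 1, Math.ceil(0.95 * s.length) - 1)];
}

function permutationPValue(a: number[], b: number[], iters = 2000): number {
  const observed = p95(a) - p95(b);
  const pooled = [...a, ...b];
  let atLeastAsExtreme = 0;
  for (let i = 0; i < iters; i++) {
    // Fisher-Yates shuffle, then split into pseudo-A and pseudo-B.
    const s = [...pooled];
    for (let j = s.length - 1; j > 0; j--) {
      const k = Math.floor(Math.random() * (j + 1));
      [s[j], s[k]] = [s[k], s[j]];
    }
    const diff = p95(s.slice(0, a.length)) - p95(s.slice(a.length));
    if (diff >= observed) atLeastAsExtreme++;
  }
  return atLeastAsExtreme / iters;
}

// Illustrative render-time samples (ms); note the tail outliers in control.
const control = [12, 13, 15, 14, 40, 13, 12, 16, 15, 42];
const optimized = [11, 12, 12, 13, 14, 12, 11, 13, 14, 15];
const p = permutationPValue(control, optimized);
console.log(p < 0.05 ? "significant" : "not significant");
```

Permutation tests are attractive here because tail statistics like p95 are far from normally distributed, so t-test assumptions rarely hold for render-time data.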
## Acceptance Criteria
- Provide profiling artifacts and a root-cause analysis.
- Show measurable improvements in p95/p99 render times.
- Document a reproducible profiling protocol.
## How It Works
This skill performs rigorous, statistically grounded performance profiling for large React applications. It isolates render hotspots, measures tail latency, and quantifies the impact of optimizations, with the goal of reproducible analysis that drives measurable improvements in p95/p99 render times.
It combines commit profiling, flame graphs, and synthetic benchmarks to pinpoint expensive renders and asynchronous bottlenecks, and runs controlled A/B experiments with variance-aware metrics and significance testing to separate signal from noise. Outputs include profiling artifacts, a root-cause analysis, and a documented protocol for reproducible measurement.
## FAQ
**Will this work for server-side rendering?**
Yes. The approach adapts to SSR by profiling server render time and hydration costs, but you should measure the server and client phases separately.
**How do you ensure statistical validity?**
Use repeated runs, control groups, and significance tests on tail metrics. Report confidence intervals and effect sizes, not just p-values.
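The confidence-interval and effect-size advice above can be sketched as a small bootstrap routine. The statistic, resampling count, and render-time samples below are all illustrative assumptions, not a prescribed implementation:

```typescript
// Bootstrap confidence interval for a tail statistic (p95), plus a simple
// effect size (relative p95 change between two builds). Data is synthetic.
function p95(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  return s[Math.min(s.length - 1, Math.ceil(0.95 * s.length) - 1)];
}

function bootstrapCI(
  xs: number[],
  stat: (x: number[]) => number,
  iters = 1000,
  alpha = 0.05,
): [number, number] {
  const estimates: number[] = [];
  for (let i = 0; i < iters; i++) {
    // Resample with replacement, then recompute the statistic.
    const resample = Array.from(
      { length: xs.length },
      () => xs[Math.floor(Math.random() * xs.length)],
    );
    estimates.push(stat(resample));
  }
  estimates.sort((a, b) => a - b);
  const lo = estimates[Math.floor((alpha / 2) * iters)];
  const hi = estimates[Math.min(iters - 1, Math.floor((1 - alpha / 2) * iters))];
  return [lo, hi];
}

// Illustrative render times (ms) before and after an optimization.
const before = [14, 15, 16, 15, 44, 14, 15, 17, 16, 48];
const after = [12, 13, 13, 14, 15, 13, 12, 14, 15, 16];
const [lo, hi] = bootstrapCI(before, p95);
// Report an interval and an effect size, not just a point estimate.
const effect = (p95(before) - p95(after)) / p95(before);
console.log(`p95 before: ${p95(before)} ms, 95% CI [${lo}, ${hi}]`);
console.log(`relative p95 improvement: ${(effect * 100).toFixed(1)}%`);
```

Because a bootstrapped p95 is always one of the original samples, the interval is bounded by the observed data; wide intervals are a direct signal that more benchmark runs are needed before claiming a regression or a win.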