
performance-regression-gates skill


This skill helps prevent performance regressions by enforcing bundle size, API response time, and load-time thresholds in CI.

npx playbooks add skill amnadtaowsoam/cerebraskills --skill performance-regression-gates


SKILL.md
---
name: Performance Regression Gates
description: Gates for detecting performance regression through benchmarks, bundle size, and load time monitoring
---

# Performance Regression Gates

## Overview

Gates that prevent performance regressions: bundle size, response time, and load time must not exceed configured thresholds.

## Why This Matters

- **User experience**: A slow app drives users away
- **Prevent degradation**: Catch regressions before users notice them
- **Accountability**: Know which PR made things slower
- **Automation**: No manual performance testing needed

---

## Performance Metrics

### 1. Bundle Size
```bash
# Check bundle size
npm run build
npm run analyze

# Threshold: No >10% increase
Before: 250 KB
After:  260 KB (+4%) ✓ Pass
After:  280 KB (+12%) ✗ Fail
```
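The 10% budget above can be enforced with a few lines of shell. This is a minimal sketch with hypothetical sizes; in a real pipeline `BASELINE_KB` and `CURRENT_KB` would come from measuring the base-branch and PR builds (e.g. with `wc -c` on the build output):

```bash
#!/bin/sh
# Minimal sketch of a bundle-size gate; the sizes here are hypothetical.
BASELINE_KB=250
CURRENT_KB=260
THRESHOLD_PCT=10   # max allowed increase

# Integer percent change: (current - baseline) * 100 / baseline
CHANGE_PCT=$(( (CURRENT_KB - BASELINE_KB) * 100 / BASELINE_KB ))
echo "bundle size: ${BASELINE_KB} KB -> ${CURRENT_KB} KB (+${CHANGE_PCT}%)"

if [ "$CHANGE_PCT" -gt "$THRESHOLD_PCT" ]; then
  echo "FAIL: exceeds +${THRESHOLD_PCT}% budget"
  exit 1
fi
echo "PASS"
```

Integer math is enough here: a percent-level budget does not need fractional precision, and it keeps the script POSIX-portable.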

### 2. Response Time
```bash
# API benchmark
npm run benchmark:api

# Threshold: No >20% slower
GET /users: 50ms → 55ms (+10%) ✓ Pass
GET /users: 50ms → 65ms (+30%) ✗ Fail
```
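A latency gate can be sketched the same way. The sample values below are hypothetical; in CI they would be parsed from the output of `npm run benchmark:api`, and using the median of several runs reduces noise from a single slow sample:

```bash
#!/bin/sh
# Sketch of a latency gate using the median of several hypothetical runs.
BASELINE_MS=50
SAMPLES="52 55 51 54 53"   # five hypothetical measurements (ms)

# Median of 5 sorted values is the 3rd one
MEDIAN_MS=$(printf '%s\n' $SAMPLES | sort -n | sed -n '3p')
CHANGE_PCT=$(( (MEDIAN_MS - BASELINE_MS) * 100 / BASELINE_MS ))
echo "median latency: ${MEDIAN_MS}ms (baseline ${BASELINE_MS}ms, +${CHANGE_PCT}%)"

if [ "$CHANGE_PCT" -gt 20 ]; then
  echo "FAIL: latency regressed by more than 20%"
  exit 1
fi
echo "PASS"
```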

### 3. Load Time
```bash
# Lighthouse CI
npm run lighthouse

# Threshold: Score ≥90
Performance: 95 ✓ Pass
Performance: 85 ✗ Fail
```
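The score check can also be scripted against the Lighthouse report JSON. This sketch inlines a stand-in report and extracts the 0–1 `performance` score with `grep`/`cut` (a real pipeline would more robustly use `jq` on the actual report file):

```bash
#!/bin/sh
# Sketch of gating on a Lighthouse score; the JSON is an inlined stand-in
# for a real Lighthouse report.
REPORT='{"categories":{"performance":{"score":0.95}}}'
SCORE=$(echo "$REPORT" | grep -o '"score":[0-9.]*' | cut -d: -f2)

# Convert the 0-1 score to an integer percent for shell comparison
PCT=$(awk -v s="$SCORE" 'BEGIN { printf "%d", s * 100 + 0.5 }')

if [ "$PCT" -ge 90 ]; then
  echo "PASS: performance score ${PCT}"
else
  echo "FAIL: performance score ${PCT} < 90"
fi
```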

---

## CI Pipeline

```yaml
# .github/workflows/performance.yml
name: Performance Gates
on: [pull_request]

jobs:
  performance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - uses: actions/setup-node@v3
        with:
          node-version: 18

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Bundle Size Check
        uses: andresz1/size-limit-action@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          # Size budgets are read from the size-limit config (see Configuration)

      - name: Lighthouse CI
        uses: treosh/lighthouse-ci-action@v9
        with:
          runs: 3
          # Assertions (performance >= 90, etc.) live in lighthouserc.json
          configPath: ./lighthouserc.json
      
      - name: API Benchmark
        run: npm run benchmark:api
```

---

## Configuration

### Bundle Size Limit
```json
{
  "size-limit": [
    {
      "path": "dist/bundle.js",
      "limit": "250 KB",
      "gzip": true
    }
  ]
}
```

### Lighthouse Thresholds
```json
{
  "ci": {
    "assert": {
      "assertions": {
        "performance": ["error", {"minScore": 0.9}],
        "first-contentful-paint": ["error", {"maxNumericValue": 2000}],
        "interactive": ["error", {"maxNumericValue": 3500}]
      }
    }
  }
}
```

---

## Summary

**Performance Gates:** Prevent performance regression

**Metrics:**
- Bundle size: No >10% increase
- Response time: No >20% slower
- Lighthouse: Score ≥90

**Tools:**
- size-limit (bundle)
- Lighthouse CI (load time)
- Custom benchmarks (API)

**Action:** Block merge if regression detected

Overview

This skill enforces performance regression gates in CI to prevent bundle size growth, slower API responses, and degraded page load metrics. It integrates benchmark checks, bundle analysis, and Lighthouse CI assertions to block merges when thresholds are breached. It provides clear thresholds and automation patterns so teams catch regressions before users notice.

How this skill works

The skill runs three automated checks in pull request pipelines: bundle-size analysis, API benchmarks, and Lighthouse audits. Each check compares current results against configured thresholds (percent change or absolute score) and fails the job when limits are exceeded. Results are reported in CI so authors can see which metric regressed and why.

When to use it

  • Protect main or release branches from performance regressions
  • Enforce SLOs for frontend load time and API latency
  • Automatically gate pull requests that modify client bundles or backend endpoints
  • Ensure consistent user experience after refactors or dependency updates
  • Add measurable performance requirements to team CI/CD workflows

Best practices

  • Set conservative, achievable thresholds and tighten them gradually
  • Measure multiple runs and use medians to reduce noise
  • Include gzip/minified bundle size in comparisons
  • Run benchmarks in a consistent environment or use CI runners with stable specs
  • Fail fast on clear regressions but allow annotation for flaky or noisy cases

Example use cases

  • Block PRs that increase bundle size by more than 10% compared to base branch
  • Fail CI if API median latency grows beyond 20% for critical endpoints
  • Require Lighthouse performance score >= 90 before merging a UI change
  • Combine size-limit, Lighthouse CI, and custom benchmark scripts in a single workflow
  • Annotate PRs with detailed metric diffs to point authors to problematic files or endpoints

FAQ

What thresholds should I start with?

Begin with the recommended values: bundle size ≤10% increase, API latency ≤20% slower, Lighthouse performance ≥90. Adjust based on historical variance and team goals.

How do I handle noisy benchmark results?

Run multiple iterations and use median or p95 values, stabilize the CI environment, or mark flaky tests as advisory while you improve measurement reliability.
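The p95 approach can be sketched in shell: the p95 of N sorted samples is the value at position ceil(0.95 · N). The sample latencies below are hypothetical:

```bash
#!/bin/sh
# Sketch: p95 of N sorted samples is the value at index ceil(0.95 * N).
SAMPLES="48 50 51 52 53 55 60 61 70 95"   # hypothetical latencies (ms)

N=$(printf '%s\n' $SAMPLES | wc -l)
IDX=$(( (95 * N + 99) / 100 ))            # ceil(0.95 * N) in integer math
P95=$(printf '%s\n' $SAMPLES | sort -n | sed -n "${IDX}p")
echo "p95 latency: ${P95}ms over ${N} samples"
```

Gating on p95 rather than the mean keeps a single outlier run from failing the gate while still catching genuine tail-latency regressions.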