
This skill helps you implement and validate Supabase load testing with auto-scaling and capacity planning, delivering repeatable benchmarks and clear capacity recommendations.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill supabase-load-scale

Review the files below or copy the command above to add this skill to your agents.

Files (6)
SKILL.md
1.5 KB
---
name: supabase-load-scale
description: |
  Implement Supabase load testing, auto-scaling, and capacity planning strategies.
  Use when running performance tests, configuring horizontal scaling,
  or planning capacity for Supabase integrations.
  Trigger with phrases like "supabase load test", "supabase scale",
  "supabase performance test", "supabase capacity", "supabase k6", "supabase benchmark".
allowed-tools: Read, Write, Edit, Bash(k6:*), Bash(kubectl:*)
version: 1.0.0
license: MIT
author: Jeremy Longshore <[email protected]>
---

# Supabase Load Scale

## Prerequisites
- k6 load testing tool installed
- Kubernetes cluster with HPA configured
- Prometheus for metrics collection
- Test environment API keys

## Instructions

### Step 1: Create Load Test Script
Write k6 test script with appropriate thresholds.
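
A minimal sketch of such a script, assuming traffic hits a hypothetical `profiles` table through the Supabase REST endpoint and the test-environment URL and anon key are passed in as environment variables; stages and thresholds are starting points to tune against real metrics:

```javascript
// load-test.js -- illustrative k6 scenario against a Supabase REST endpoint
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 100 }, // ramp up to 100 virtual users
    { duration: '5m', target: 100 }, // hold steady load
    { duration: '2m', target: 0 },   // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // fail the run if p95 latency exceeds 500 ms
    http_req_failed: ['rate<0.01'],   // fail the run if more than 1% of requests error
  },
};

// Test-environment values supplied at run time,
// e.g. `k6 run -e SUPABASE_URL=... -e SUPABASE_ANON_KEY=... load-test.js`
const BASE_URL = __ENV.SUPABASE_URL;
const ANON_KEY = __ENV.SUPABASE_ANON_KEY;

export default function () {
  // Read from a hypothetical `profiles` table via the PostgREST API
  const res = http.get(`${BASE_URL}/rest/v1/profiles?select=id&limit=10`, {
    headers: { apikey: ANON_KEY, Authorization: `Bearer ${ANON_KEY}` },
  });
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```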

### Step 2: Configure Auto-Scaling
Set up HPA with CPU and custom metrics.
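
A sketch of the CPU-based portion, assuming a hypothetical `api-gateway` Deployment fronts the Supabase calls; names and targets are placeholders, and the custom-metrics side depends on your metrics adapter:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-gateway
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-gateway
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when average CPU crosses 70%
```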

### Step 3: Run Load Test
Execute test and collect metrics.
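
One way to run it, assuming the script and HPA sketched above; the URL, paths, and values are illustrative:

```bash
# Run against the staging project; --summary-export writes aggregate results to JSON
k6 run \
  -e SUPABASE_URL="https://staging-project.supabase.co" \
  -e SUPABASE_ANON_KEY="$SUPABASE_ANON_KEY" \
  --summary-export=results/summary.json \
  load-test.js
# If your k6 build includes the experimental Prometheus remote-write output,
# adding `-o experimental-prometheus-rw` can stream test metrics to Prometheus as well.

# Watch scaling behavior while the test is running
kubectl get hpa api-gateway --watch
```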

### Step 4: Analyze and Document
Record results in benchmark template.
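
A minimal benchmark-template sketch (columns are suggestions; record one row per run):

| Run date | Workload profile | Peak VUs | p95 latency (ms) | Error rate | Peak pods | Recommendation |
| --- | --- | --- | --- | --- | --- | --- |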

## Output
- Load test script created
- HPA configured
- Benchmark results documented
- Capacity recommendations defined

## Error Handling

See `{baseDir}/references/errors.md` for comprehensive error handling.

## Examples

See `{baseDir}/references/examples.md` for detailed examples.

## Resources
- [k6 Documentation](https://k6.io/docs/)
- [Kubernetes HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
- [Supabase Rate Limits](https://supabase.com/docs/rate-limits)

Overview

This skill implements Supabase load testing, auto-scaling, and capacity planning workflows to validate and scale Supabase-backed services. It provides a practical sequence: create k6 load scripts, run tests against a staging environment, collect Prometheus metrics, and configure Kubernetes HPA for horizontal scaling. The goal is repeatable benchmarks and clear capacity recommendations.

How this skill works

You write k6 scenarios that simulate realistic traffic and set pass/fail thresholds for latency and error rates. Tests run against a test environment using API keys while Prometheus scrapes metrics from services and the cluster. Use collected CPU, memory, and custom metrics to drive HPA configuration and define scaling targets. Analyze results in a benchmark template to produce actionable capacity plans and recommendations.

When to use it

  • Before production launches to validate Supabase-backed endpoints under load
  • When configuring horizontal pod autoscaling for services relying on Supabase
  • During performance regressions or after major schema or query changes
  • To define rate limits and plan capacity for expected traffic growth
  • When benchmarking Supabase integrations with different instance sizes or network configurations

Best practices

  • Use a dedicated staging environment with the same topology as production
  • Start with conservative k6 thresholds and iterate based on real metrics
  • Collect both cluster-level (CPU/memory) and application-level (latency/errors) metrics
  • Drive HPA using both CPU and custom metrics (e.g., requests/s or queue depth); see the sketch after this list
  • Document every run in a benchmark template including workload profile and test data
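
A sketch of an HPA metrics section that combines CPU with a per-pod request-rate metric. It assumes a metrics adapter (for example Prometheus Adapter) already exposes `http_requests_per_second` for the target pods; the metric name and target value are placeholders:

```yaml
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # served through the custom.metrics.k8s.io API
        target:
          type: AverageValue
          averageValue: "100"              # aim for ~100 req/s per pod before scaling out
```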

Example use cases

  • Create a k6 script to simulate 10k concurrent users hitting endpoints protected by Supabase row-level security policies
  • Measure and tune HPA to maintain p95 latency under a steady traffic ramp
  • Compare capacity requirements for different database plans or region deployments
  • Validate burst handling by running spike tests and confirming autoscaling behavior (see the sketch after this list)
  • Produce a capacity report that maps concurrent connections to recommended pod counts
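
A spike-test sketch using k6's scenarios API; the VU counts and durations are illustrative and should be sized to your expected burst:

```javascript
export const options = {
  scenarios: {
    spike: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '30s', target: 50 },   // baseline traffic
        { duration: '10s', target: 1000 }, // sudden burst
        { duration: '2m', target: 1000 },  // hold while the HPA reacts
        { duration: '30s', target: 0 },    // recover
      ],
    },
  },
  thresholds: {
    http_req_failed: ['rate<0.05'], // tolerate a slightly higher error rate during the burst
  },
};
```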

FAQ

What tools do I need to run these workflows?

Install k6 for load generation, Prometheus for metrics collection, and have a Kubernetes cluster with HPA enabled. Ensure test environment API keys are available.

How do I pick HPA metrics for Supabase workloads?

Combine CPU/memory targets with custom application metrics such as requests-per-second, error rate, or active DB connections to align scaling with real load patterns.

How should I interpret benchmark results?

Focus on latency percentiles (p95/p99), error rates, and resource consumption. Translate those into expected pod counts and instance sizes, then validate with follow-up tests.
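
For example (illustrative numbers only): if one pod sustains roughly 200 requests/s while staying under the p95 target, then planning for 1,500 requests/s suggests about 8 pods plus 20-30% headroom for bursts; confirm the estimate with a follow-up run at that replica count.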