
This skill helps you boost Supabase performance by implementing caching, batching, and connection pooling to reduce latency.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill supabase-performance-tuning

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
1.6 KB
---
name: supabase-performance-tuning
description: |
  Optimize Supabase API performance with caching, batching, and connection pooling.
  Use when experiencing slow API responses, implementing caching strategies,
  or optimizing request throughput for Supabase integrations.
  Trigger with phrases like "supabase performance", "optimize supabase",
  "supabase latency", "supabase caching", "supabase slow", "supabase batch".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <[email protected]>
---

# Supabase Performance Tuning

## Prerequisites
- Supabase SDK installed
- Understanding of async patterns
- Redis or in-memory cache available (optional)
- Performance monitoring in place

## Instructions

### Step 1: Establish Baseline
Measure current latency for critical Supabase operations.
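A baseline can be as simple as timing each critical call over several runs and recording a median. The harness below is a minimal sketch; `op` stands in for any async Supabase operation you want to profile (e.g. a `select()` call), and the run count of 5 is illustrative.

```typescript
// Minimal latency baseline: time an async operation over several runs
// and report the median (p50), which is more stable than a single sample.
async function measureP50(
  label: string,
  op: () => Promise<unknown>,
  runs = 5,
): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await op();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const p50 = samples[Math.floor(samples.length / 2)];
  console.log(`${label}: p50 ${p50.toFixed(1)} ms over ${runs} runs`);
  return p50;
}
```

Record these numbers before making any change so each optimization can be judged against the same baseline.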

### Step 2: Implement Caching
Add response caching for frequently accessed data.
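One common pattern here is cache-aside with a TTL: check the cache, fall through to Supabase on a miss, and store the result with an expiry. The sketch below uses a plain in-memory map under an assumed TTL; `loader` stands in for the real Supabase query, and a shared store such as Redis would replace the map in multi-instance deployments.

```typescript
// TTL cache with a cache-aside read helper. Entries expire after ttlMs
// and are lazily evicted on the next read.
type Entry<T> = { value: T; expires: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();
  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const e = this.store.get(key);
    if (!e || e.expires < Date.now()) {
      this.store.delete(key); // expired or missing: drop and miss
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

// Cache-aside: return a hit if present, otherwise load, store, return.
async function cachedFetch<T>(
  cache: TtlCache<T>,
  key: string,
  loader: () => Promise<T>,
): Promise<T> {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = await loader();
  cache.set(key, value);
  return value;
}
```

Keep TTLs short for data that changes often; a second read within the TTL never touches Supabase.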

### Step 3: Enable Batching
Use DataLoader or similar for automatic request batching.
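DataLoader's core trick is collapsing all lookups issued in the same tick into one batched call. The dependency-free sketch below mimics that behavior with a microtask-scheduled flush; `batchFn` stands in for a single Supabase query that fetches many keys at once (e.g. an `.in("id", keys)` filter). In production, the DataLoader library linked below adds per-key caching and error handling on top of this.

```typescript
// Collapses load() calls made in the same tick into one batchFn call,
// mimicking DataLoader's core batching behavior. batchFn must return
// values in the same order as the keys it receives.
class Batcher<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush after the current tick so concurrent loads coalesce.
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}
```

Three `load()` calls in the same tick produce exactly one `batchFn` invocation with all three keys.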

### Step 4: Optimize Connections
Configure connection pooling with keep-alive.
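Database-side pooling is configured in the Supabase dashboard; on the client side, the main win is reusing TCP/TLS connections instead of paying the handshake cost on every request. The sketch below shows a Node keep-alive agent with illustrative pool sizes; how you wire the agent into your HTTP client depends on the client you use.

```typescript
import { Agent } from "node:https";

// Keep-alive agent: holds sockets open between requests so repeated
// calls skip TCP/TLS setup. The limits below are illustrative, not tuned.
const keepAliveAgent = new Agent({
  keepAlive: true,
  keepAliveMsecs: 30_000, // initial delay before TCP keep-alive probes
  maxSockets: 20,         // cap on concurrent sockets per host
});
```

Tune `maxSockets` against your measured concurrency; too low a cap queues requests, too high defeats pooling.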

## Output
- Reduced API latency
- Caching layer implemented
- Request batching enabled
- Connection pooling configured

## Error Handling

See `{baseDir}/references/errors.md` for comprehensive error handling.

## Examples

See `{baseDir}/references/examples.md` for detailed examples.

## Resources
- [Supabase Performance Guide](https://supabase.com/docs/performance)
- [DataLoader Documentation](https://github.com/graphql/dataloader)
- [LRU Cache Documentation](https://github.com/isaacs/node-lru-cache)

Overview

This skill optimizes Supabase API performance using caching, batching, and connection pooling to reduce latency and increase throughput. It provides practical patterns and code-ready guidance for integrating response caches, request batching, and connection configuration. Use it to create measurable performance improvements for Supabase-backed services.

How this skill works

The skill walks through establishing a latency baseline, adding a caching layer for frequently read data, and applying request batching (e.g., DataLoader) to collapse duplicate calls. It also shows connection optimizations such as keep-alive and pooling settings to reduce connection overhead. Combined monitoring and error handling guidance ensures safe rollouts and measurable gains.

When to use it

  • When API responses from Supabase are slower than acceptable
  • When you see many repetitive reads that can be cached
  • When high request concurrency is causing connection churn
  • When implementing server-side caching, including Redis for shared caches
  • When integrating GraphQL or batchable endpoints that can use DataLoader

Best practices

  • Start by measuring a baseline latency and request profile before changes
  • Cache only idempotent, frequently-read responses, apply TTLs, and invalidate on writes
  • Use request batching for patterns with repeated lookups; avoid batching for unique writes
  • Configure connection pooling and keep-alive to reduce TCP/TLS setup cost
  • Monitor latency and error rates after each change and roll back if regressions appear

Example use cases

  • Reduce read latency for user profile endpoints with a short TTL Redis cache
  • Batch hundreds of foreign-key lookups in GraphQL resolvers using DataLoader
  • Configure pooled Supabase connections in serverless functions to avoid cold-connection spikes
  • Add an in-memory LRU cache for per-instance caching in low-concurrency services
  • Implement a hybrid cache: short in-process LRU plus shared Redis for multi-instance deployments
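For the per-instance half of those setups, a small LRU keeps hot entries in process memory. The sketch below is a minimal LRU built on `Map` insertion order; it is illustrative only, and the lru-cache package linked above is the production choice.

```typescript
// Tiny LRU cache: Map preserves insertion order, so re-inserting on
// access moves an entry to the most-recent position, and the first
// key in iteration order is always the least-recently used.
class Lru<K, V> {
  private map = new Map<K, V>();
  constructor(private max: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key);
    this.map.set(key, value); // move to most-recent position
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.max) {
      // Evict the least-recently-used entry (first in iteration order).
      this.map.delete(this.map.keys().next().value!);
    }
    this.map.set(key, value);
  }
}
```

In a hybrid setup, this sits in front of the shared Redis layer: check the LRU first, then Redis, then Supabase.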

FAQ

Will caching risk serving stale data?

Yes. Use conservative TTLs, publish cache invalidation on writes, or implement cache-aside patterns to minimize staleness.
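Invalidation-on-write can be as simple as dropping the cached entry after the write succeeds, so the next read repopulates it. In this sketch, `writeToDb` is a hypothetical stand-in for the real Supabase update.

```typescript
// Invalidate-on-write: persist first, then drop the stale cache entry
// so the next read fetches fresh data.
const profileCache = new Map<string, string>();

async function updateProfileName(
  id: string,
  name: string,
  writeToDb: (id: string, name: string) => Promise<void>,
): Promise<void> {
  await writeToDb(id, name); // write to the database first
  profileCache.delete(id);   // then invalidate the cached copy
}
```

With a shared Redis cache, the delete becomes a `DEL` (or a published invalidation event) so every instance drops the stale entry.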

Is batching safe for write operations?

Batching is best suited for read-heavy operations. For writes, use explicit transaction semantics and avoid collapsing distinct write intents into a single batch.