---
name: clay-performance-tuning
description: |
  Optimize Clay API performance with caching, batching, and connection pooling.
  Use when experiencing slow API responses, implementing caching strategies,
  or optimizing request throughput for Clay integrations.
  Trigger with phrases like "clay performance", "optimize clay",
  "clay latency", "clay caching", "clay slow", "clay batch".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <[email protected]>
---
# Clay Performance Tuning
## Overview
Optimize Clay API performance with caching, batching, and connection pooling.
## Prerequisites
- Clay SDK installed
- Understanding of async patterns
- Redis or in-memory cache available (optional)
- Performance monitoring in place
## Latency Benchmarks
| Operation | P50 | P95 | P99 |
|-----------|-----|-----|-----|
| Read | 50ms | 150ms | 300ms |
| Write | 100ms | 250ms | 500ms |
| List | 75ms | 200ms | 400ms |
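
To compare your own measurements against these targets, here is a minimal sketch that computes nearest-rank percentiles over a set of collected latency samples (in ms); the sampling itself is up to you:

```typescript
// Minimal sketch: compute P50/P95/P99 from recorded latency samples
// (values in ms), for comparison against the benchmark table above.
// Uses the nearest-rank method on a sorted copy of the samples.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

function summarize(samples: number[]) {
  return {
    p50: percentile(samples, 50),
    p95: percentile(samples, 95),
    p99: percentile(samples, 99),
  };
}
```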
## Caching Strategy
### Response Caching
```typescript
import { LRUCache } from 'lru-cache';

const cache = new LRUCache<string, any>({
  max: 1000,
  ttl: 60000, // 1 minute
  updateAgeOnGet: true,
});

async function cachedClayRequest<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttl?: number
): Promise<T> {
  const cached = cache.get(key);
  if (cached !== undefined) return cached as T; // explicit check avoids treating falsy values as misses
  const result = await fetcher();
  cache.set(key, result, { ttl });
  return result;
}
```
### Redis Caching (Distributed)
```typescript
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL!);

async function cachedWithRedis<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds = 60
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as T;
  const result = await fetcher();
  await redis.setex(key, ttlSeconds, JSON.stringify(result));
  return result;
}
```
## Request Batching
```typescript
import DataLoader from 'dataloader';

const clayLoader = new DataLoader<string, any>(
  async (ids) => {
    // Batch fetch from Clay
    const results = await clayClient.batchGet(ids);
    return ids.map(id => results.find(r => r.id === id) || null);
  },
  {
    maxBatchSize: 100,
    batchScheduleFn: callback => setTimeout(callback, 10),
  }
);

// Usage - automatically batched
const [item1, item2, item3] = await Promise.all([
  clayLoader.load('id-1'),
  clayLoader.load('id-2'),
  clayLoader.load('id-3'),
]);
```
## Connection Optimization
```typescript
import { Agent } from 'https';

// Keep-alive connection pooling
const agent = new Agent({
  keepAlive: true,
  maxSockets: 10,
  maxFreeSockets: 5,
  timeout: 30000,
});

const client = new ClayClient({
  apiKey: process.env.CLAY_API_KEY!,
  httpAgent: agent,
});
```
## Pagination Optimization
```typescript
async function* paginatedClayList<T>(
  fetcher: (cursor?: string) => Promise<{ data: T[]; nextCursor?: string }>
): AsyncGenerator<T> {
  let cursor: string | undefined;
  do {
    const { data, nextCursor } = await fetcher(cursor);
    for (const item of data) {
      yield item;
    }
    cursor = nextCursor;
  } while (cursor);
}

// Usage
for await (const item of paginatedClayList(cursor =>
  clayClient.list({ cursor, limit: 100 })
)) {
  await process(item);
}
```
## Performance Monitoring
```typescript
async function measuredClayCall<T>(
  operation: string,
  fn: () => Promise<T>
): Promise<T> {
  const start = performance.now();
  try {
    const result = await fn();
    const duration = performance.now() - start;
    console.log({ operation, duration, status: 'success' });
    return result;
  } catch (error) {
    const duration = performance.now() - start;
    console.error({ operation, duration, status: 'error', error });
    throw error;
  }
}
```
## Instructions
### Step 1: Establish Baseline
Measure current latency for critical Clay operations.
### Step 2: Implement Caching
Add response caching for frequently accessed data.
### Step 3: Enable Batching
Use DataLoader or similar for automatic request batching.
### Step 4: Optimize Connections
Configure connection pooling with keep-alive.
## Output
- Reduced API latency
- Caching layer implemented
- Request batching enabled
- Connection pooling configured
## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| Cache miss storm | TTL expired | Use stale-while-revalidate |
| Batch timeout | Too many items | Reduce batch size |
| Connection exhausted | No pooling | Configure max sockets |
| Memory pressure | Cache too large | Set max cache entries |
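
A cache miss storm can be mitigated with stale-while-revalidate: on an expired entry, serve the stale value immediately and refresh it once in the background, so only a cold miss ever blocks on the fetch. A minimal in-memory sketch (the entry shape and TTLs here are illustrative, not part of the Clay SDK):

```typescript
// Minimal stale-while-revalidate sketch: entries carry their own expiry.
// On a stale hit, return the old value right away and refresh in the
// background, deduplicating concurrent refreshes via an in-flight map.
type Entry<T> = { value: T; expiresAt: number };

const swrCache = new Map<string, Entry<unknown>>();
const inflight = new Map<string, Promise<unknown>>();

async function staleWhileRevalidate<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlMs = 60_000
): Promise<T> {
  const entry = swrCache.get(key) as Entry<T> | undefined;
  const now = Date.now();

  // Fresh hit: return immediately.
  if (entry && now < entry.expiresAt) return entry.value;

  if (entry) {
    // Stale hit: kick off one background refresh, serve the stale value.
    if (!inflight.has(key)) {
      inflight.set(
        key,
        fetcher()
          .then(value => {
            swrCache.set(key, { value, expiresAt: Date.now() + ttlMs });
            return value;
          })
          .finally(() => inflight.delete(key))
      );
    }
    return entry.value;
  }

  // Cold miss: no choice but to wait for the fetch.
  const value = await fetcher();
  swrCache.set(key, { value, expiresAt: now + ttlMs });
  return value;
}
```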
## Examples
### Quick Performance Wrapper
```typescript
const withPerformance = <T>(name: string, fn: () => Promise<T>) =>
  measuredClayCall(name, () =>
    cachedClayRequest(`cache:${name}`, fn)
  );
```
## Resources
- [Clay Performance Guide](https://docs.clay.com/performance)
- [DataLoader Documentation](https://github.com/graphql/dataloader)
- [LRU Cache Documentation](https://github.com/isaacs/node-lru-cache)
## Next Steps
For cost optimization, see `clay-cost-tuning`.
## FAQ
**How long should I set cache TTLs?**
Use short TTLs (tens of seconds to a few minutes) for dynamic data and longer TTLs for stable data. Employ stale-while-revalidate to reduce cache miss storms.
**What batch size is safe?**
Start with conservative sizes (50–100), observe latency and error rates, and reduce the size if batches time out or overload the service.
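
The batch-size guidance above can be enforced with a small chunking helper. A sketch, where `batchGet` stands in for your Clay client's batch endpoint (a hypothetical callback, not a specific SDK method):

```typescript
// Sketch: split a large ID list into bounded chunks and fetch them
// sequentially, so no single request exceeds the configured batch size.
async function fetchInChunks<T>(
  ids: string[],
  batchGet: (chunk: string[]) => Promise<T[]>,
  maxBatchSize = 50
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < ids.length; i += maxBatchSize) {
    const chunk = ids.slice(i, i + maxBatchSize);
    results.push(...(await batchGet(chunk)));
  }
  return results;
}
```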