
This skill helps you implement idempotent API operations using idempotency keys, Redis caching, and DB constraints to prevent duplicates.

npx playbooks add skill secondsky/claude-skills --skill idempotency-handling

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
5.5 KB
---
name: idempotency-handling
description: Idempotent API operations with idempotency keys, Redis caching, and database constraints. Use for payment systems, webhook retries, and safe retries, or when debugging duplicate processing, race conditions, or key-expiry errors.
---

# Idempotency Handling

Ensure operations produce identical results regardless of execution count.

## Idempotency Key Pattern

```javascript
const redis = require('redis');

const client = redis.createClient();
client.connect().catch(console.error); // node-redis v4 requires an explicit connection before use

async function idempotencyMiddleware(req, res, next) {
  const key = req.headers['idempotency-key'];
  if (!key) return next();

  // Replay the cached response if this key has already been processed
  const cached = await client.get(`idempotency:${key}`);
  if (cached) {
    const { status, body } = JSON.parse(cached);
    return res.status(status).json(body);
  }

  // Wrap res.json to cache the response before it is sent
  const originalJson = res.json.bind(res);
  res.json = async (body) => {
    await client.setEx(
      `idempotency:${key}`,
      86400, // 24-hour TTL
      JSON.stringify({ status: res.statusCode, body })
    );
    return originalJson(body);
  };

  next();
}
```
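
To wire the middleware into an application, apply it to mutation routes only. A minimal sketch, assuming an Express app; `chargeCustomer` is a placeholder for your actual payment logic:

```javascript
const express = require('express');

const app = express();
app.use(express.json());

// Idempotency middleware applied only to the mutation endpoint
app.post('/payments', idempotencyMiddleware, async (req, res) => {
  const result = await chargeCustomer(req.body); // placeholder business logic
  res.status(201).json(result);
});

app.listen(3000);
```

Clients must reuse the same `Idempotency-Key` header value on every retry of the same logical operation; a fresh key per attempt defeats the cache lookup.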

## Database-Backed Idempotency

```sql
CREATE TABLE idempotency_keys (
  key VARCHAR(255) PRIMARY KEY,
  request_hash VARCHAR(64) NOT NULL,
  response JSONB,
  status VARCHAR(20) DEFAULT 'processing',
  created_at TIMESTAMP DEFAULT NOW(),
  expires_at TIMESTAMP DEFAULT NOW() + INTERVAL '24 hours'
);

CREATE INDEX idx_idempotency_expires ON idempotency_keys(expires_at);
```

```javascript
const crypto = require('crypto');

async function processPayment(idempotencyKey, payload) {
  // Hash the payload so key reuse with a different request can be detected
  const requestHash = crypto.createHash('sha256')
    .update(JSON.stringify(payload)).digest('hex');

  // Try to insert with 'processing' status - only one request will succeed
  const insertResult = await db.query(
    `INSERT INTO idempotency_keys (key, request_hash, status)
     VALUES ($1, $2, 'processing')
     ON CONFLICT (key) DO NOTHING
     RETURNING *`,
    [idempotencyKey, requestHash]
  );

  // If we inserted the row (rowCount === 1), we're responsible for processing
  if (insertResult.rowCount === 1) {
    try {
      // Execute the payment
      const result = await executePayment(payload);

      // Update to completed with response
      await db.query(
        'UPDATE idempotency_keys SET status = $1, response = $2 WHERE key = $3',
        ['completed', JSON.stringify(result), idempotencyKey]
      );

      return result;
    } catch (error) {
      // Mark as failed on error
      await db.query(
        'UPDATE idempotency_keys SET status = $1, response = $2 WHERE key = $3',
        ['failed', JSON.stringify({ error: error.message }), idempotencyKey]
      );
      throw error;
    }
  }

  // Another request is/was processing this key - check status
  const existing = await db.query(
    'SELECT * FROM idempotency_keys WHERE key = $1',
    [idempotencyKey]
  );

  const row = existing.rows[0];
  if (!row) {
    throw new Error('Unexpected: idempotency key vanished');
  }

  // Verify request hasn't changed
  if (row.request_hash !== requestHash) {
    throw new Error('Idempotency key reused with different request');
  }

  // Check status
  if (row.status === 'completed') {
    // node-postgres returns JSONB columns as already-parsed objects
    return row.response;
  } else if (row.status === 'processing') {
    throw new Error('Request already processing - retry later');
  } else if (row.status === 'failed') {
    const failedResponse = row.response;
    throw new Error(`Previous attempt failed: ${failedResponse.error}`);
  }

  throw new Error(`Unknown status: ${row.status}`);
}
```
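
A route-level sketch showing how this function might be invoked, assuming the Express setup above; the error-to-HTTP-status mapping here is illustrative rather than part of the pattern:

```javascript
app.post('/payments', async (req, res) => {
  const key = req.headers['idempotency-key'];
  if (!key) {
    return res.status(400).json({ error: 'Idempotency-Key header is required' });
  }

  try {
    const result = await processPayment(key, req.body);
    res.status(201).json(result);
  } catch (err) {
    // 409 for an in-flight duplicate, 422 for key reuse with a different payload
    if (err.message.includes('already processing')) {
      return res.status(409).json({ error: err.message });
    }
    if (err.message.includes('different request')) {
      return res.status(422).json({ error: err.message });
    }
    res.status(500).json({ error: err.message });
  }
});
```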

## When to Apply

- Payment processing
- Order creation
- Webhook handling
- Email sending
- Any operation where duplicates cause issues

## Best Practices

- Require idempotency keys for mutations (see the client-side sketch after this list)
- Validate request body matches stored request
- Set appropriate TTL (24 hours typical)
- Use atomic database operations
- Implement cleanup jobs to prevent table bloat
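
The examples above are server-side only; how the client generates and reuses a key matters just as much. A minimal client-side sketch, assuming Node 18+ (global `fetch`) and a placeholder endpoint URL:

```javascript
const { randomUUID } = require('crypto');

async function createPaymentWithRetry(payload, maxAttempts = 3) {
  // Generate the key once per logical operation and reuse it on every retry
  const idempotencyKey = randomUUID();

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch('https://api.example.com/payments', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Idempotency-Key': idempotencyKey,
      },
      body: JSON.stringify(payload),
    });

    if (response.ok) return response.json();

    // Only retry server errors; a 4xx response will not change on retry
    if (response.status < 500 || attempt === maxAttempts) {
      throw new Error(`Payment request failed with status ${response.status}`);
    }

    // Simple linear backoff before retrying with the same key
    await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
  }
}
```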

### TTL Cleanup Strategy

To prevent unbounded table growth, implement periodic cleanup of expired keys:

**Option 1: Scheduled Database Job (PostgreSQL)**
```sql
-- Run hourly via pg_cron or external scheduler
-- PostgreSQL has no DELETE ... LIMIT, so batch via a subquery to avoid long locks
DELETE FROM idempotency_keys
WHERE key IN (
  SELECT key FROM idempotency_keys
  WHERE expires_at < NOW()
  LIMIT 1000
);
```
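
If the pg_cron extension is available, the batch delete can be scheduled inside the database itself; a sketch (the job name is arbitrary):

```sql
SELECT cron.schedule(
  'cleanup-idempotency-keys',
  '0 * * * *', -- hourly
  $$
  DELETE FROM idempotency_keys
  WHERE key IN (
    SELECT key FROM idempotency_keys
    WHERE expires_at < NOW()
    LIMIT 1000
  );
  $$
);
```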

**Option 2: Application Cleanup Job (Node.js)**
```javascript
// Run via cron or job scheduler (e.g., node-cron, Bull)
const cron = require('node-cron');
async function cleanupExpiredKeys() {
  try {
    const result = await db.query(
      'DELETE FROM idempotency_keys WHERE expires_at < NOW()'
    );
    console.log(`Cleaned up ${result.rowCount} expired idempotency keys`);
  } catch (error) {
    console.error('Cleanup job failed:', error);
  }
}

// Schedule to run every hour
cron.schedule('0 * * * *', cleanupExpiredKeys);
```

**Option 3: Application Cleanup Job (Python)**
```python
from datetime import datetime

# db is assumed to be an asyncpg-style connection or pool
async def cleanup_expired_keys():
    """Remove expired idempotency keys to prevent table bloat."""
    try:
        result = await db.execute(
            "DELETE FROM idempotency_keys WHERE expires_at < $1",
            datetime.now()
        )
        # asyncpg execute() returns a status string such as 'DELETE 42'
        print(f"Cleanup finished: {result}")
    except Exception as e:
        print(f"Cleanup job failed: {e}")

# Run with APScheduler, Celery, or similar
# scheduler.add_job(cleanup_expired_keys, 'interval', hours=1)
```

**Cleanup Best Practices:**
- Run cleanup during low-traffic periods to minimize lock contention
- Use batched deletes (`LIMIT 1000`) for large tables
- Monitor cleanup job execution and failures
- Consider partitioning the table by created_at for easier cleanup
- Set up alerts if table size grows unexpectedly

Overview

This skill implements robust idempotency handling for APIs using idempotency keys, Redis caching, and a database-backed pattern. It prevents duplicate processing, handles concurrent requests safely, and stores responses for retries. It is designed for production use in payment, webhook, and other sensitive mutation flows.

How this skill works

Requests that include an idempotency key are checked against a Redis cache or a dedicated database table. If a cached response exists, it is returned immediately. For database-backed handling, a row is inserted with a processing status using an atomic upsert; the worker that wins processes the operation, records the completed response or a failure, and subsequent requests read that final state. Cleanup jobs or TTLs remove expired keys to avoid table bloat.

When to use it

  • Payment processing and refunds where duplicate charges must be avoided
  • Order creation or inventory changes that must be performed exactly once
  • Webhook handlers that may receive retries from third-party services
  • Email or notification sends where duplicates are undesirable
  • Any mutation endpoint exposed to retries, race conditions, or network failures

Best practices

  • Require idempotency keys for all mutation endpoints and reject missing keys by policy
  • Validate incoming request body/hash against the stored request_hash before returning a cached response
  • Set a reasonable TTL (24 hours is typical) and persist responses for reliable retries
  • Use atomic DB operations (INSERT ... ON CONFLICT DO NOTHING) to ensure a single processor handles work
  • Run periodic cleanup jobs with batched deletes and schedule during low-traffic windows

Example use cases

  • Payment gateway endpoint that must avoid double-charging customers on client retries
  • Webhook receiver that safely de-duplicates repeated delivery attempts from external services
  • Order creation API that ensures inventory and order rows are created exactly once
  • Background job that retries external API calls and returns cached success or failure payloads
  • API gateway that stores and replays responses for clients after transient failures

FAQ

What if two requests use the same idempotency key but different payloads?

Validate the request hash stored with the key. If the hash differs, reject the request and surface an explicit error indicating key reuse with a different request.

How long should I keep idempotency state?

Keep state long enough to cover expected client retry windows; 24 hours is common. Use TTLs and scheduled cleanup to purge older keys and avoid table growth.