
caching skill

/skills/nestjs/caching

This skill helps you implement multi-level caching in NestJS with Redis, providing stale-while-revalidate patterns, proper key management, and stampede protection.

npx playbooks add skill hoangnguyen0403/agent-skills-standard --skill caching

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.5 KB
---
name: NestJS Caching & Redis
description: Multi-level caching, invalidation patterns, and stampede protection.
metadata:
  labels: [nestjs, caching, redis, performance]
  triggers:
    files: ['**/*.service.ts', '**/*.interceptor.ts']
    keywords: [CacheInterceptor, CacheTTL, Redis, stale-while-revalidate]
---

# Caching & Redis Standards

## **Priority: P1 (OPERATIONAL)**

Caching strategies and Redis integration patterns for high-performance NestJS applications.

## Caching Strategy

- **Layering**: Use **Multi-Level Caching** for high-traffic read endpoints (see the sketch after this list).
  - **L1 (Local)**: In-Memory (Node.js heap). Ultra-fast, no network. Ideal for config/static data. Use `lru-cache`.
  - **L2 (Distributed)**: Redis. Shared across pods.
- **Pattern**: Implement **Stale-While-Revalidate** where possible to avoid latency spikes during cache misses.
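
A minimal sketch of the layering idea, assuming `lru-cache` (v10+) for L1 and `ioredis` for L2; the `TwoLevelCache` class and its key/TTL choices are illustrative, not part of NestJS or `cache-manager`.

```typescript
import { LRUCache } from 'lru-cache';
import Redis from 'ioredis';

// Illustrative two-level read-through cache: check the local LRU first,
// fall back to Redis, and finally to the loader (e.g. a DB query).
export class TwoLevelCache {
  private l1 = new LRUCache<string, string>({ max: 500, ttl: 30_000 }); // 30s local TTL

  constructor(private readonly redis: Redis) {}

  async get(key: string, loader: () => Promise<string>, ttlSeconds = 300): Promise<string> {
    const local = this.l1.get(key);
    if (local !== undefined) return local; // L1 hit: no network round-trip

    const shared = await this.redis.get(key);
    if (shared !== null) {
      this.l1.set(key, shared); // backfill L1 from L2
      return shared;
    }

    const fresh = await loader(); // miss on both levels: compute/fetch the value
    await this.redis.set(key, fresh, 'EX', ttlSeconds);
    this.l1.set(key, fresh);
    return fresh;
  }
}
```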

## NestJS Implementation

- **Library**: Use `cache-manager` with `cache-manager-redis-yet` (recommended over `cache-manager-redis-store` for node-redis v4 support and stability).
- **Interceptors**: Use `@UseInterceptors(CacheInterceptor)` for simple GET responses.
  - **Warning**: By default, this uses the request URL as the cache key. Ensure consistent query parameter ordering or use a custom key generator (see the interceptor sketch below).
- **Decorators**: Standardize custom cache keys.

  ```typescript
  @Get()
  @CacheKey('users_list')
  @CacheTTL(300) // 5 minutes (seconds with cache-manager v4; v5 expects milliseconds)
  findAll() { ... }
  ```
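
For the key-generation warning above, NestJS lets you extend `CacheInterceptor` and override `trackBy`. A sketch assuming an Express-style request object; the interceptor name is illustrative.

```typescript
import { Injectable, ExecutionContext } from '@nestjs/common';
import { CacheInterceptor } from '@nestjs/cache-manager';

@Injectable()
export class SortedQueryCacheInterceptor extends CacheInterceptor {
  // Build the key from the path plus alphabetically sorted query params so
  // /users?a=1&b=2 and /users?b=2&a=1 share one cache entry.
  protected trackBy(context: ExecutionContext): string | undefined {
    const request = context.switchToHttp().getRequest();
    if (request.method !== 'GET') return undefined; // skip caching non-GET requests

    const query = request.query ?? {};
    const sorted = Object.keys(query)
      .sort()
      .map((k) => `${k}=${query[k]}`)
      .join('&');
    return sorted ? `${request.path}?${sorted}` : request.path;
  }
}
```

Apply it with `@UseInterceptors(SortedQueryCacheInterceptor)` in place of the stock `CacheInterceptor`.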

## Redis Data Structures (Expert)

- Don't stop at `GET/SET`; pick the structure that fits the access pattern (see the sketch after this list).
- **Hash (`HSET`)**: Store objects such as user profiles. Allows partial updates (e.g. `HSET user:1 lastLogin <timestamp>`) without re-serializing the whole object.
- **Set (`SADD`)**: Unique collections (e.g., "Online User IDs"). O(1) membership checks.
- **Sorted Set (`ZADD`)**: Priority queues, Leaderboards, or Rate Limiting windows.
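
A short sketch of these structures using `ioredis`; the key names, fields, and values are illustrative.

```typescript
import Redis from 'ioredis';

async function demo(redis: Redis): Promise<void> {
  // Hash: update one field of a user profile without rewriting the whole object
  await redis.hset('user:1', 'lastLogin', Date.now().toString());

  // Set: O(1) membership check for "online users"
  await redis.sadd('online_users', 'user:1');
  const isOnline = await redis.sismember('online_users', 'user:1'); // 1 or 0

  // Sorted Set: leaderboard ordered by score
  await redis.zadd('leaderboard', 42, 'user:1');
  const top10 = await redis.zrevrange('leaderboard', 0, 9, 'WITHSCORES');

  console.log({ isOnline, top10 });
}
```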

## Invalidation Patterns

- **Problem**: "There are only two hard things in Computer Science: cache invalidation and naming things."
- **Tagging**: Since Redis cannot match key patterns efficiently (`KEYS` is O(N) and should be banned in production), use **Sets** to group related keys (see the sketch after this list).
  - _Create_: register each cache key in the entity's tag Set, e.g. `SADD post:1:tags cache:post:1`.
  - _Invalidate_: read the Set's members, `DEL` those keys, then delete the Set itself.
- **Event-Driven**: Listen to Domain Events (`UserUpdated`) to trigger invalidation asynchronously.
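
A sketch of the tagging pattern with `ioredis`, following the `post:1:tags` / `cache:post:1` naming used above; the function names are illustrative.

```typescript
import Redis from 'ioredis';

// Write path: cache the value and register its key in the entity's tag Set.
async function cachePost(redis: Redis, postId: number, payload: string): Promise<void> {
  const cacheKey = `cache:post:${postId}`;
  await redis.set(cacheKey, payload, 'EX', 300);
  await redis.sadd(`post:${postId}:tags`, cacheKey);
}

// Invalidation path: read the tag Set, DEL every registered key, then the Set itself.
async function invalidatePost(redis: Redis, postId: number): Promise<void> {
  const tagSet = `post:${postId}:tags`;
  const keys = await redis.smembers(tagSet);
  if (keys.length > 0) await redis.del(...keys);
  await redis.del(tagSet);
}
```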

## Stampede Protection

- **Jitter**: Add random variance to TTLs (e.g., 300s ± 10s) to prevent all keys expiring simultaneously.
- **Locking**: On a miss, let **one** process recompute the value while others wait or serve stale data (complex; often handled by `swr`-style libraries). See the sketch below.
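
A sketch combining both ideas with `ioredis`; the lock key, stale-copy key, and timings are assumptions, not part of any library API.

```typescript
import Redis from 'ioredis';

// Jitter: 300s base TTL +/- 10s so keys don't all expire at the same instant.
function jitteredTtl(baseSeconds = 300, jitterSeconds = 10): number {
  return baseSeconds + Math.round((Math.random() * 2 - 1) * jitterSeconds);
}

// Locking: only the process that wins the NX lock recomputes; everyone else
// serves the (possibly stale) copy instead of hammering the data source.
async function getWithLock(
  redis: Redis,
  key: string,
  loader: () => Promise<string>,
): Promise<string | null> {
  const cached = await redis.get(key);
  if (cached !== null) return cached;

  const gotLock = await redis.set(`${key}:lock`, '1', 'EX', 30, 'NX');
  if (gotLock !== 'OK') {
    return redis.get(`stale:${key}`); // another worker is recomputing; fall back to the stale copy
  }
  try {
    const fresh = await loader();
    await redis.set(key, fresh, 'EX', jitteredTtl());
    await redis.set(`stale:${key}`, fresh, 'EX', jitteredTtl(3600)); // longer-lived stale copy
    return fresh;
  } finally {
    await redis.del(`${key}:lock`);
  }
}
```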

Overview

This skill documents practical NestJS caching and Redis patterns for high-performance services. It focuses on multi-level caching, safe invalidation, and cache stampede protection to reduce latency and load. The guidance is framework-focused and ready to apply in TypeScript NestJS projects.

How this skill works

It prescribes a two-layer cache: L1 in-memory for ultra-fast local reads and L2 Redis for shared, cross-pod state. Implementations use cache-manager with a Redis adapter, standardized cache keys and TTLs, and interceptors or decorators to apply caching consistently. Advanced patterns include Redis data structures, tag-based invalidation via Sets, and stampede protection via jitter and single-writer locks.

When to use it

  • High-read, low-write endpoints where reducing latency and DB load matters.
  • Config or static data that benefits from ultra-fast local reads (L1).
  • Distributed deployments (multiple pods) that need shared cache coherence (L2).
  • Endpoints that risk cache stampedes during mass expiration or traffic spikes.
  • Use-cases where partial updates benefit from Redis Hashes or Sets.

Best practices

  • Implement multi-level caching: L1 in-memory (lru-cache) + L2 Redis for distribution.
  • Standardize cache keys and use custom key generators to avoid URL/query instability.
  • Prefer cache-manager with cache-manager-redis-yet for node-redis v4 support and stability.
  • Group keys with Redis Sets for efficient tag-based invalidation instead of KEYS.
  • Add TTL jitter and single-writer locking for stampede protection; return stale while revalidating where acceptable.

Example use cases

  • Caching user lists or product catalogs with @CacheKey and @CacheTTL for consistent keys and expirations.
  • Storing user profiles in Redis Hashes to allow partial HSET updates without full serialization.
  • Tracking online users with Redis Sets (SADD) for O(1) membership checks and efficient invalidation.
  • Implementing leaderboards or time-windowed rate limits using Sorted Sets (ZADD).
  • Event-driven invalidation (UserUpdated) to asynchronously DEL or update only affected keys.

FAQ

Should I always use both L1 and L2 caches?

Not always. Use L1 for ultra-low-latency, per-instance reads and L2 when you need shared state. Evaluate memory cost and coherence needs before adding L1.

How do I avoid expensive Redis KEY scans in production?

Avoid KEYS. Maintain Sets that group related keys at write time, then use those Sets to look up and DEL the keys during invalidation.