---
name: suspense-data-architectures
description: Engineer data-fetching architectures that fully leverage `Suspense`, streaming SSR, and granular cache invalidation.
---
# Suspense Data Architectures (React 18)
## Summary
Engineer data-fetching architectures that fully leverage `Suspense`, streaming SSR, and granular cache invalidation.
## Key Capabilities
- Build a resource cache with deterministic invalidation boundaries.
- Compose data dependencies across micro-frontends without waterfalling.
- Integrate streaming SSR with client hydration and error recovery.
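A minimal sketch of the first capability: a resource cache keyed by deterministic strings, following the Suspense contract (`read()` throws the pending promise so React can suspend) and exposing abortable fetches so a suspended request can be cancelled without leaking. All names here (`createResource`, `Resource`, the cache shape) are illustrative assumptions, not an API this skill prescribes.

```typescript
// Hypothetical Suspense-compatible resource cache (names are illustrative).
// read() throws the in-flight promise while pending -- the Suspense contract --
// returns the value once resolved, and rethrows the error on failure.

type Status = "pending" | "resolved" | "rejected";

interface Resource<T> {
  read(): T;     // throws a promise while pending (Suspense suspends on it)
  abort(): void; // cancels the underlying fetch and evicts the cache entry
}

const cache = new Map<string, Resource<unknown>>();

function createResource<T>(
  key: string,
  fetcher: (signal: AbortSignal) => Promise<T>,
): Resource<T> {
  const existing = cache.get(key);
  if (existing) return existing as Resource<T>; // deterministic key -> one entry

  const controller = new AbortController();
  let status: Status = "pending";
  let result!: T;
  let error: unknown;

  const promise = fetcher(controller.signal).then(
    (value) => { status = "resolved"; result = value; },
    (err) => { status = "rejected"; error = err; },
  );

  const resource: Resource<T> = {
    read() {
      if (status === "pending") throw promise; // Suspense boundary suspends here
      if (status === "rejected") throw error;  // error boundary catches here
      return result;
    },
    abort() {
      controller.abort();
      cache.delete(key); // evict so a retry creates a fresh resource
    },
  };
  cache.set(key, resource);
  return resource;
}
```

A component would call `createResource(key, fetcher).read()` during render; repeated renders hit the cached entry, and `abort()` gives the "no leaked requests" guarantee described above.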
## PhD-Level Challenges
- Formalize dependency graphs and compute optimal prefetch sets.
- Prove bounded revalidation strategies under concurrent updates.
- Analyze cache coherence trade-offs using real-world latency traces.
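The first challenge above (prefetch sets over a dependency graph) can be sketched with a simple topological layering: resources in the same "wave" have no edges between them and can be fetched concurrently, which bounds total latency by the depth of the graph rather than its size. This is one illustrative formulation, not the skill's prescribed algorithm.

```typescript
// Hypothetical prefetch-wave computation via topological layering.
// graph maps each resource to the resources it depends on.

type Graph = Record<string, string[]>;

function prefetchWaves(graph: Graph): string[][] {
  // Track the unsatisfied dependencies of every node.
  const remaining = new Map<string, Set<string>>(
    Object.entries(graph).map(([node, deps]) => [node, new Set(deps)]),
  );
  const waves: string[][] = [];

  while (remaining.size > 0) {
    // Everything whose dependencies are all satisfied forms the next wave.
    const wave = [...remaining.keys()].filter(
      (node) => remaining.get(node)!.size === 0,
    );
    if (wave.length === 0) throw new Error("dependency cycle detected");
    waves.push(wave.sort());

    // Mark the wave as satisfied for all remaining nodes.
    for (const done of wave) remaining.delete(done);
    for (const deps of remaining.values()) {
      for (const done of wave) deps.delete(done);
    }
  }
  return waves;
}
```

Each wave can then be issued as one batch of parallel fetches, giving a concrete upper bound on sequential round trips.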
## Acceptance Criteria
- Demonstrate Suspense-enabled data loading with abortable fetches.
- Implement error boundaries that isolate failed data segments.
- Provide a diagram of dependency graph and cache invalidation paths.
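The error-isolation criterion can be illustrated framework-free: each data segment renders independently, and a failure in one is replaced by its fallback without disturbing its neighbors. In React this role is played by an error boundary component (a class implementing `getDerivedStateFromError`); the sketch below only models the containment property, with all names assumed for illustration.

```typescript
// Hypothetical sketch of per-segment error isolation: a throw inside one
// segment's render (e.g. a rejected Suspense resource rethrown by read())
// is caught at that segment's boundary and swapped for its fallback.

interface Segment {
  name: string;
  render: () => string; // e.g. reads a Suspense resource
  fallback: string;     // shown only if this segment fails
}

function renderSegments(segments: Segment[]): Record<string, string> {
  const out: Record<string, string> = {};
  for (const seg of segments) {
    try {
      out[seg.name] = seg.render();
    } catch {
      out[seg.name] = seg.fallback; // failure stays inside this segment
    }
  }
  return out;
}
```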
## Overview
This skill engineers data-fetching architectures that fully leverage React Suspense, streaming server-side rendering (SSR), and granular cache invalidation. It focuses on deterministic resource caching, composing multi-source dependencies without waterfalling, and robust client hydration with error isolation. The design targets high-availability frontends that must deliver progressive UI updates with precise revalidation guarantees.
The skill builds a resource cache layer with clear invalidation boundaries and abortable fetch semantics, so Suspense boundaries can suspend and resume without leaking requests. It composes dependency graphs across micro-frontends to compute waterfall-free prefetch sets, and it wires streaming SSR to send resolved UI fragments progressively while preserving hydration integrity. Error boundaries are scoped to individual data segments, so a failure is contained and neighboring UI continues to render.
## FAQ
**How does this approach avoid waterfall data fetching?**
By modeling dependencies explicitly and computing non-overlapping prefetch sets, components suspend independently, so parallel fetches proceed without sequential waits.
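The contrast behind that answer can be shown in a few lines: a waterfall awaits each fetch before starting the next, while starting every independent fetch before awaiting any of them lets the requests overlap. This is a generic illustration, not code from the skill itself.

```typescript
// Waterfall: each fetch starts only after the previous one resolves,
// so total latency is the sum of all request latencies.
async function waterfall(fetchers: Array<() => Promise<string>>): Promise<string[]> {
  const results: string[] = [];
  for (const f of fetchers) results.push(await f()); // sequential waits
  return results;
}

// Parallel: every fetch is started before any is awaited, so total
// latency approaches the slowest single request. Independent Suspense
// boundaries give the same overlap at the component level.
async function parallel(fetchers: Array<() => Promise<string>>): Promise<string[]> {
  return Promise.all(fetchers.map((f) => f())); // all start immediately
}
```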
**Can cache invalidation be targeted to a single component?**
Yes. Resources are keyed with deterministic boundaries and expose targeted invalidation methods, so only affected segments revalidate.
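One way to realize such targeted invalidation is hierarchical keys plus prefix eviction: invalidating `"user:1"` evicts only that user's segments and leaves everything else cached. The `SegmentCache` class and its key scheme below are assumptions for illustration.

```typescript
// Hypothetical cache with deterministic, hierarchical keys
// ("user:1:profile") and prefix-scoped invalidation.
class SegmentCache {
  private entries = new Map<string, unknown>();

  set(key: string, value: unknown): void {
    this.entries.set(key, value);
  }

  get<T>(key: string): T | undefined {
    return this.entries.get(key) as T | undefined;
  }

  // Evict an exact key or every key under a prefix; returns evicted keys.
  // Only the matching segments will revalidate on their next read.
  invalidate(prefix: string): string[] {
    const evicted: string[] = [];
    for (const key of this.entries.keys()) {
      if (key === prefix || key.startsWith(prefix + ":")) {
        this.entries.delete(key); // deleting during Map iteration is safe in JS
        evicted.push(key);
      }
    }
    return evicted;
  }
}
```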