Install with:

```sh
npx playbooks add skill harborgrid-justin/lexiflow-premium --skill scalable-routing-and-code-splitting
```
---
name: scalable-routing-and-code-splitting
description: Design routing and code-splitting strategies that scale to large applications with minimal load-time overhead.
---
# Scalable Routing and Code Splitting (React 18)
## Summary
Design routing and code-splitting strategies that scale to large applications with minimal load-time overhead.
## Key Capabilities
- Implement route-based and component-based splitting with preloading.
- Coordinate code-splitting with `Suspense` and streaming SSR.
- Optimize preload heuristics based on navigation prediction.
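Route-based splitting with preloading can be sketched as a thin wrapper around a dynamic `import()` factory. The import promise is cached so that a `preload()` call (e.g. on link hover) and the eventual render share a single network request. `lazyWithPreload` is a hypothetical helper name, not part of React; in a real app the cached factory would be handed to `React.lazy`.

```typescript
// Wrap a dynamic import factory so a route module can be preloaded ahead
// of navigation. The promise is cached: preload() and the actual render
// trigger at most one fetch between them.
type Loader<T> = () => Promise<T>;

interface PreloadableLoader<T> extends Loader<T> {
  preload: () => Promise<T>;
}

function lazyWithPreload<T>(factory: Loader<T>): PreloadableLoader<T> {
  let cached: Promise<T> | undefined;
  const load = () => (cached ??= factory());
  const loader = (() => load()) as PreloadableLoader<T>;
  loader.preload = load;
  return loader;
}

// Usage sketch (assumed file layout and React wiring):
// const Settings = lazyWithPreload(() => import("./routes/Settings"));
// const SettingsRoute = React.lazy(Settings);
// <Link to="/settings" onMouseEnter={() => Settings.preload()} />
```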
## PhD-Level Challenges
- Model navigation predictions and minimize prefetch waste.
- Prove correctness of lazy-loading error recovery.
- Quantify trade-offs between bundle count and latency.
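The bundle-count/latency trade-off can be made concrete with a toy analytic model: each extra request wave adds round-trip overhead, while coarser splitting ships unused bytes on the initial route. All constants below (RTT, bandwidth, parallelism) are illustrative assumptions, not measurements; real evaluation should use field telemetry.

```typescript
// Toy latency model for comparing splitting plans. More bundles means more
// request overhead; fewer bundles means more unused bytes shipped up front.
interface SplitPlan {
  bundles: number;     // bundles fetched for the initial route
  usedBytes: number;   // bytes actually needed by the route
  unusedBytes: number; // extra bytes shipped because of coarse splitting
}

function estimateLoadMs(
  plan: SplitPlan,
  rttMs = 50,        // per-request-wave overhead (assumed)
  bytesPerMs = 200,  // effective bandwidth (assumed)
  parallelism = 6,   // concurrent requests the browser allows (assumed)
): number {
  const requestWaves = Math.ceil(plan.bundles / parallelism);
  const transferMs = (plan.usedBytes + plan.unusedBytes) / bytesPerMs;
  return requestWaves * rttMs + transferMs;
}
```

Comparing a single 400 KB bundle against six small bundles totalling 120 KB for the same route makes the trade-off visible: under this model, the finer plan wins despite issuing more requests, because they fit in one parallel wave.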
## Acceptance Criteria
- Demonstrate measurable improvements in initial load time.
- Provide preload heuristics and evaluation results.
- Document error recovery for lazy-loaded routes.
This skill designs routing and code-splitting strategies that scale to large web applications while minimizing load-time overhead. It focuses on route-based and component-level splitting, preloading heuristics, and safe lazy-loading patterns that integrate with Suspense and streaming SSR. The goal is measurable improvement in initial load time and predictable navigation latency.
The skill inspects application routes and component boundaries to generate a splitting plan that balances bundle count and runtime requests. It adds preload and prefetch hooks driven by navigation prediction, coordinates Suspense fallbacks, and integrates with server-side streaming to hydrate incrementally. It also provides error-recovery strategies for lazy-loaded modules and collects metrics to evaluate trade-offs.
## FAQ

**How do you avoid prefetching too much and wasting bandwidth?**

Use lightweight prediction rules combined with adaptive thresholds and telemetry. Limit concurrent preloads and favor optimistic prefetch only for top-ranked predictions.
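This policy can be sketched as a small budgeted selector: predictions are ranked by confidence, filtered by an adaptive threshold, and capped at a concurrency limit. The function name, threshold, and cap are illustrative assumptions; in practice they would be tuned from telemetry.

```typescript
// Select which predicted routes to prefetch under a bandwidth budget.
// Only predictions above `minProbability` are considered, ranked by
// confidence, and at most `maxConcurrent` are fetched optimistically.
interface Prediction {
  route: string;
  probability: number; // estimated chance the user navigates here next
}

function selectPrefetchTargets(
  predictions: Prediction[],
  maxConcurrent = 2,    // cap on simultaneous preloads (assumed)
  minProbability = 0.3, // adaptive threshold from telemetry (assumed)
): string[] {
  return predictions
    .filter((p) => p.probability >= minProbability)
    .sort((a, b) => b.probability - a.probability)
    .slice(0, maxConcurrent)
    .map((p) => p.route);
}
```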
**What happens if a lazy-loaded route fails to load?**

Provide an explicit error boundary that retries the fetch with exponential backoff, offers the user a fallback action, and logs the failure for later analysis so recovery stays safe and observable.
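The retry half of that recovery strategy can be sketched as a wrapper that re-invokes the import factory with exponential backoff before surfacing the error to the boundary. `importWithRetry` is a hypothetical helper, and the retry count and delays are illustrative defaults.

```typescript
// Retry a dynamic import with exponential backoff before letting the
// error propagate to an error boundary. Defaults are illustrative.
async function importWithRetry<T>(
  factory: () => Promise<T>,
  retries = 3,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await factory();
    } catch (err) {
      if (attempt >= retries) throw err; // exhausted: error boundary takes over
      const delay = baseDelayMs * 2 ** attempt; // 200ms, 400ms, 800ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
      // In production, log the failure here for analysis.
    }
  }
}

// Usage sketch with React.lazy (assumed route path):
// const Reports = React.lazy(() => importWithRetry(() => import("./routes/Reports")));
```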