
frontend-async-best-practices skill


This skill helps optimize asynchronous JavaScript by applying parallelism and branch-aware awaits to eliminate waterfalls and speed up data fetching.

npx playbooks add skill sergiodxa/agent-skills --skill frontend-async-best-practices

Review the files below or copy the command above to add this skill to your agents.

Files (6)
SKILL.md
3.2 KB
---
name: frontend-async-best-practices
description: Async/await and Promise optimization guidelines. Use when writing, reviewing, or refactoring asynchronous code to eliminate waterfalls and maximize parallelism. Triggers on tasks involving data fetching, loaders, actions, or Promise handling.
---

# Async Best Practices

Performance optimization patterns for asynchronous JavaScript code. Contains 5 rules focused on eliminating request waterfalls and maximizing parallelism.

**Impact: CRITICAL** - Waterfalls are the #1 performance killer. Each sequential await adds full network latency.

## When to Apply

Reference these guidelines when:

- Writing Remix loaders or actions
- Implementing data fetching logic
- Working with multiple async operations
- Reviewing code for waterfall patterns
- Optimizing response times

## Rules Summary

### parallel (CRITICAL) — @rules/parallel.md

Use `Promise.all()` for independent operations.

```typescript
// Bad: 3 sequential round trips
const user = await fetchUser();
const posts = await fetchPosts();
const comments = await fetchComments();

// Good: all three run in parallel (one round trip of latency)
const [user, posts, comments] = await Promise.all([
  fetchUser(),
  fetchPosts(),
  fetchComments(),
]);
```

### defer-await (HIGH) — @rules/defer-await.md

Move await into branches where actually used.

```typescript
// Bad: always waits even when skipping
async function handle(skip: boolean) {
  let data = await fetchData();
  if (skip) return { skipped: true };
  return process(data);
}

// Good: only waits when needed
async function handle(skip: boolean) {
  if (skip) return { skipped: true };
  let data = await fetchData();
  return process(data);
}
```

### dependencies (CRITICAL) — @rules/dependencies.md

Chain dependent operations, parallelize independent ones.

```typescript
// Bad: profile waits for config unnecessarily
const [user, config] = await Promise.all([fetchUser(), fetchConfig()]);
const profile = await fetchProfile(user.id);

// Good: profile starts as soon as user resolves
const userPromise = fetchUser();
const profilePromise = userPromise.then((user) => fetchProfile(user.id));

const [user, config, profile] = await Promise.all([
  userPromise,
  fetchConfig(),
  profilePromise,
]);
```

### api-routes (CRITICAL) — @rules/api-routes.md

Start promises early, await late in loaders.

```typescript
// Bad: sequential execution
export async function loader() {
  let session = await auth();
  let config = await fetchConfig();
  return { session, config };
}

// Good: parallel execution
export async function loader() {
  let sessionPromise = auth();
  let configPromise = fetchConfig();
  const [session, config] = await Promise.all([sessionPromise, configPromise]);
  return { session, config };
}
```

### suspense-boundaries (HIGH) — @rules/suspense-boundaries.md

Use Suspense to show UI immediately while data loads.

```tsx
// Bad: entire page blocked by data
async function Page() {
  let data = await fetchData();
  return (
    <Layout>
      <Content data={data} />
    </Layout>
  );
}

// Good: layout shows immediately, content streams in
function Page() {
  return (
    <Layout>
      <Suspense fallback={<Skeleton />}>
        <Content />
      </Suspense>
    </Layout>
  );
}
```
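For the "Good" case to stream, the child must do its own awaiting inside the boundary. A minimal sketch of what `Content` might look like, assuming React Server Components and the same illustrative `fetchData` used above:

```tsx
// Hypothetical async server component: it awaits its own data, so only
// the subtree inside the Suspense boundary waits while the layout renders.
async function Content() {
  let data = await fetchData(); // illustrative data source, not a real API
  return <article>{data.body}</article>;
}
```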

Overview

This skill delivers concise async/await and Promise optimization guidelines to eliminate request waterfalls and maximize parallelism. It focuses on patterns for loaders, actions, fetch logic, and Promise handling that materially improve response time. Use it when writing, reviewing, or refactoring asynchronous front-end code to reduce latency and surface UI faster.

How this skill works

The skill inspects async flows and recommends transformations: start independent promises earlier, combine them with Promise.all, defer awaiting until a value is actually needed, and chain dependent calls rather than serializing unrelated ones. It flags common waterfall patterns in loaders, API routes, and component code and suggests Suspense-based UI boundaries to stream content progressively. Advice is actionable and tied to concrete code patterns so you can apply fixes quickly.

When to use it

  • Writing or reviewing Remix loaders or actions
  • Implementing or refactoring data fetching logic
  • Handling multiple independent async operations
  • Optimizing API routes or server-side loaders for latency
  • Adding progressive rendering with Suspense

Best practices

  • Parallelize independent calls with Promise.all to avoid sequential network round trips
  • Start promises early and await late so multiple tasks overlap (see the combined loader sketch after this list)
  • Defer awaiting until inside the branch that actually needs the result
  • Chain dependent operations (use .then or async/await where necessary) so only true dependencies block
  • Use Suspense boundaries to render layout immediately and stream heavy data into placeholders
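
A hedged sketch that combines several of these practices in one Remix-style loader; `auth`, `fetchConfig`, and `fetchProfile` are illustrative stand-ins, not APIs this skill ships:

```typescript
export async function loader() {
  // Start independent work immediately ("start early, await late").
  let sessionPromise = auth();
  let configPromise = fetchConfig();
  // Chain the dependent call so only its true dependency blocks it.
  let profilePromise = sessionPromise.then((session) =>
    fetchProfile(session.userId),
  );
  // Await everything together; total latency is the slowest chain, not the sum.
  const [session, config, profile] = await Promise.all([
    sessionPromise,
    configPromise,
    profilePromise,
  ]);
  return { session, config, profile };
}
```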

Example use cases

  • Refactor a loader that currently awaits each fetch sequentially into a single Promise.all call
  • Change a form action to start fetches early and await all results at the end of the action (see the sketch after this list)
  • Move an await out of an unconditional path so inexpensive branches skip network work
  • Create Suspense-wrapped components so the page layout shows instantly while data loads
  • Optimize an API route by starting authentication and config fetches in parallel
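
For instance, the form-action refactor could look roughly like this; `saveDraft` and `notifyTeam` are hypothetical helpers used only to illustrate the shape:

```typescript
export async function action({ request }: { request: Request }) {
  let formData = await request.formData(); // needed before anything else
  // Kick off independent work up front instead of awaiting one call at a time.
  let draftPromise = saveDraft(formData);
  let notifyPromise = notifyTeam(formData);
  // Await all results together at the end of the action.
  const [draft] = await Promise.all([draftPromise, notifyPromise]);
  return { ok: true, draftId: draft.id };
}
```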

FAQ

Will Promise.all ever slow things down?

Not if the operations are independent. The promises run concurrently, and awaiting Promise.all only waits for the slowest one to settle. Keep in mind that it rejects as soon as any input rejects; for partial failure handling, use Promise.allSettled or individual try/catch, as sketched below.
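
A minimal Promise.allSettled sketch for the partial-failure case (the fetchers are illustrative):

```typescript
// Each entry is { status: "fulfilled", value } or { status: "rejected", reason },
// so one failing fetch does not discard the other result.
const [userResult, postsResult] = await Promise.allSettled([
  fetchUser(),
  fetchPosts(),
]);
const user = userResult.status === "fulfilled" ? userResult.value : null;
const posts = postsResult.status === "fulfilled" ? postsResult.value : [];
```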

When should I avoid parallelizing calls?

Avoid parallelizing when calls are logically dependent (one requires the result of another) or when an external rate limit enforces sequencing. In those cases, chain the dependent call and parallelize only the independent work, as in the sketch below.
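
When a rate limit forces sequencing, a plain loop keeps the calls serial; `fetchPage` and the page IDs are hypothetical:

```typescript
// Intentionally sequential: each request waits for the previous one
// so the calls stay within the upstream rate limit.
const pageIds = [1, 2, 3]; // illustrative
const pages = [];
for (const id of pageIds) {
  pages.push(await fetchPage(id));
}
```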