
data-fetching-integration skill

/skills/frontend/data-fetching-integration

This skill wires a UI to a data source using the repo's tooling, with typed inputs/outputs and robust loading, error, and caching behavior.

npx playbooks add skill velcrafting/codex-skills --skill data-fetching-integration

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (2.8 KB)
---
name: data-fetching-integration
description: Wire UI to a data source with typed inputs/outputs, loading/error/empty states, and caching rules.
metadata:
  short-description: Data hookup + UI states
  layer: frontend
  mode: write
  idempotent: false
---

# Skill: frontend/data-fetching-integration

## Purpose
Connect UI to an API/data source using the repo’s data tooling, ensuring:
- typed inputs/outputs
- correct UI state handling
- explicit caching and invalidation behavior

---

## Inputs
- Data source definition:
  - endpoint contract (method/path/request/response) OR
  - client function signature OR
  - data description (if contract not formalized yet)
- UI entry point(s) that need the data
- Expected behavior:
  - read vs write
  - polling or realtime needs
  - optimistic updates allowed (yes/no)
- Repo profile (preferred): `<repo>/REPO_PROFILE.json`

---

## Outputs
- Data call wired via repo tooling (React Query/SWR/custom)
- A hook / loader / data module consistent with repo patterns
- UI state coverage:
  - loading
  - error
  - empty
  - success
- Explicit cache key strategy and invalidation/refetch rules (code comments or helper constants)
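
As a concrete example of the last output, the cache key strategy can live in a small key-factory module. This is a sketch only; the names (`orderKeys`, `OrderFilters`) are illustrative, not part of the skill contract:

```ts
// Hypothetical key factory: every query and mutation references these
// constants, so keys stay stable and invalidation stays targeted.
export type OrderFilters = { status?: "open" | "closed"; page?: number };

export const orderKeys = {
  all: ["orders"] as const,
  list: (customerId: string, filters: OrderFilters) =>
    [...orderKeys.all, "list", customerId, filters] as const,
  detail: (orderId: string) => [...orderKeys.all, "detail", orderId] as const,
};
```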

---

## Non-goals
- Changing backend behavior or contracts
- Adding business rules client-side beyond presentation and basic form validation
- Building a new data layer abstraction unless explicitly requested

---

## Workflow
1) Identify the repo’s data fetching pattern (prefer `REPO_PROFILE.json`).
2) Implement the data call using existing client conventions.
3) Define a stable cache key derived from request inputs (see the sketch after this list).
4) Add invalidation rules:
   - mutations invalidate queries that depend on changed data
   - avoid global invalidation unless required
5) Implement UI states:
   - loading: skeleton/spinner (repo standard)
   - error: recoverable messaging + retry path if appropriate
   - empty: explicit “no data” state
   - success: render
6) Run required validations (typecheck/lint/tests per profile).
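
A minimal sketch of steps 2–5, assuming the repo uses React Query v5 and reusing the hypothetical key factory from the Outputs section; `fetchOrders`, `createOrder`, and the UI components are placeholders for the repo's own client and design system:

```tsx
import * as React from "react";
import { useQuery, useMutation, useQueryClient } from "@tanstack/react-query";
import { orderKeys, type OrderFilters } from "./orderKeys"; // hypothetical key factory

// Placeholders standing in for the repo's client, types, and components.
type Order = { id: string; total: number };
type NewOrder = Omit<Order, "id">;
declare function fetchOrders(id: string, f: OrderFilters): Promise<Order[]>;
declare function createOrder(id: string, input: NewOrder): Promise<Order>;
declare const Skeleton: React.FC;
declare const ErrorBanner: React.FC<{ onRetry: () => void }>;
declare const EmptyState: React.FC<{ label: string }>;
declare const OrderList: React.FC<{ orders: Order[] }>;

// Steps 2–3: data call via the repo client, keyed by stable request inputs.
function useOrders(customerId: string, filters: OrderFilters) {
  return useQuery({
    queryKey: orderKeys.list(customerId, filters),
    queryFn: () => fetchOrders(customerId, filters),
  });
}

// Step 4: the mutation invalidates only the order queries it affects.
function useCreateOrder(customerId: string) {
  const qc = useQueryClient();
  return useMutation({
    mutationFn: (input: NewOrder) => createOrder(customerId, input),
    onSuccess: () => qc.invalidateQueries({ queryKey: orderKeys.all }),
  });
}

// Step 5: all four UI states handled deterministically.
function OrdersPanel({ customerId }: { customerId: string }) {
  const { data, isPending, isError, refetch } = useOrders(customerId, {});
  if (isPending) return <Skeleton />;                             // loading
  if (isError) return <ErrorBanner onRetry={() => refetch()} />;  // error
  if (data.length === 0) return <EmptyState label="No orders" />; // empty
  return <OrderList orders={data} />;                             // success
}
```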

---

## Checks
- Typecheck passes (if configured)
- UI handles loading/error/empty/success states deterministically
- Cache key is stable and unique
- Invalidation/refetch behavior is explicit
- No silent swallowing of errors

---

## Failure modes
- Data tooling unclear → ask which library is used (React Query/SWR/custom) and what the default caching expectations are.
- Stale cache → fix keys and invalidation rules; avoid ad-hoc refetch loops.
- Race conditions → add latest-only guards, cancellation, or stable request identity (see the sketch below).
- Contract ambiguity → recommend `api/contract-update` or `shared/schema-types` before hard-wiring the integration.
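
For the race-condition case, a minimal latest-only guard for custom tooling might look like the following (React Query and SWR already track request identity internally; the `/api/users` endpoint is made up):

```ts
// Each call aborts the previous in-flight request, so a stale response can
// never overwrite a newer one.
function latestOnly<A extends unknown[], T>(
  fetcher: (signal: AbortSignal, ...args: A) => Promise<T>
) {
  let controller: AbortController | null = null;
  return (...args: A): Promise<T> => {
    controller?.abort();                // cancel the previous request, if any
    controller = new AbortController();
    return fetcher(controller.signal, ...args);
  };
}

// Usage with a plain fetch-based client; errors surface instead of being swallowed.
const searchUsers = latestOnly((signal, query: string) =>
  fetch(`/api/users?q=${encodeURIComponent(query)}`, { signal }).then((res) => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json() as Promise<unknown>;
  })
);
```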

---

## Telemetry
Log:
- skill: `frontend/data-fetching-integration`
- data_tooling: `react-query | swr | custom | unknown`
- cache_keys_added_or_changed
- files_touched
- outcome: `success | partial | blocked`
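
A hypothetical log entry with these fields; `logger` is a stand-in for whatever logging utility the repo provides:

```ts
// "logger" is a placeholder for the repo's logging utility.
declare const logger: { info: (event: string, data: unknown) => void };

logger.info("skill_run", {
  skill: "frontend/data-fetching-integration",
  data_tooling: "react-query",
  cache_keys_added_or_changed: ["orders.list", "orders.detail"],
  files_touched: 3,
  outcome: "success",
});
```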

Overview

This skill wires a UI to a data source using the repository’s existing data tooling, producing typed inputs/outputs, deterministic cache keys, and explicit invalidation rules. It ensures the UI covers loading, error, empty, and success states and includes telemetry and validations required by the repo profile. The focus is predictable client-side behavior without changing backend contracts.

How this skill works

I inspect the repo profile (REPO_PROFILE.json) or existing client code to determine the data-fetching library and conventions. I implement a hook/loader or data module that calls the API client, emits typed responses, and defines a stable cache key and invalidation rules. The implementation covers the four UI states deterministically (loading, error, empty, success), passes typecheck, and documents the caching strategy and refetch triggers in comments or helper constants.
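
For instance, if the profile points at SWR 2.x instead of React Query, the same hook shape might look like this; `fetchOrders`, `OrderFilters`, and `Order` are again placeholders for the repo's own client and types:

```ts
import useSWR from "swr";

type OrderFilters = { status?: "open" | "closed"; page?: number };
type Order = { id: string; total: number };
declare function fetchOrders(id: string, f: OrderFilters): Promise<Order[]>;

// The array key doubles as the cache key; SWR 2.x passes it to the fetcher.
function useOrders(customerId: string, filters: OrderFilters) {
  const { data, error, isLoading, mutate } = useSWR(
    ["orders", customerId, filters] as const,
    ([, id, f]) => fetchOrders(id, f)
  );
  return { orders: data, error, isLoading, refetch: () => mutate() };
}
```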

When to use it

  • You need a UI entry wired to an API or client function with typed inputs/outputs.
  • The repository has an established data tooling pattern (React Query, SWR, or custom) to follow.
  • You require explicit caching and invalidation behavior for consistent UX and testability.
  • The UI must deterministically show loading, error, empty, and success states.
  • You want telemetry and repo-profile-driven validations included.

Best practices

  • Prefer using REPO_PROFILE.json to discover repo conventions and lint/test commands before coding.
  • Derive cache keys from stable request inputs and avoid transient values like Date.now() (see the example below).
  • Invalidate only the queries affected by a mutation; avoid broad global invalidation unless justified.
  • Implement latest-only guards or cancellation for competing requests to prevent race conditions.
  • Document cache keys and invalidation rules in code comments or helper constants for future maintainers.
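
To make the cache-key rule concrete (assuming array-style keys with structural hashing, as in React Query):

```ts
declare const customerId: string;
declare const status: "open" | "closed";
declare const page: number;

// Unstable: Date.now() yields a new key on every render, defeating the cache.
const badKey = ["orders", customerId, Date.now()];

// Stable: derived only from request inputs; equal inputs map to the same entry.
const goodKey = ["orders", customerId, { status, page }];
```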

Example use cases

  • Create a useOrders hook that fetches orders with a stable key based on customerId and filters, handles empty lists, and invalidates on order mutations.
  • Wire a dashboard widget to a realtime feed where polling is needed; implement polling controls and a clear cache key strategy.
  • Integrate a form that performs optimistic updates: update the UI immediately, submit the mutation, and invalidate the specific query on error or success (sketched below).
  • Replace ad-hoc fetch calls with a repo-consistent loader module that passes typechecks and adds telemetry entries.
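
The optimistic-update case might be sketched like this with React Query v5; `updateOrder`, `orderKeys`, and the `Order` type are illustrative placeholders:

```ts
import { useMutation, useQueryClient } from "@tanstack/react-query";
import { orderKeys } from "./orderKeys"; // hypothetical key factory (see SKILL.md)

type Order = { id: string; total: number };
declare function updateOrder(next: Order): Promise<Order>;

function useUpdateOrderOptimistic() {
  const qc = useQueryClient();
  return useMutation({
    mutationFn: updateOrder,
    onMutate: async (next) => {
      // Stop in-flight refetches from overwriting the optimistic value.
      await qc.cancelQueries({ queryKey: orderKeys.detail(next.id) });
      const previous = qc.getQueryData<Order>(orderKeys.detail(next.id));
      qc.setQueryData(orderKeys.detail(next.id), next); // update the UI immediately
      return { previous };
    },
    onError: (_err, next, ctx) => {
      // Roll back to the snapshot captured in onMutate.
      if (ctx?.previous) qc.setQueryData(orderKeys.detail(next.id), ctx.previous);
    },
    onSettled: (_data, _err, next) => {
      // Targeted invalidation on success or error, never global.
      qc.invalidateQueries({ queryKey: orderKeys.detail(next.id) });
    },
  });
}
```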

FAQ

What if the repo data tooling is unclear?

Ask which library to use (React Query, SWR, or custom). Default to the pattern seen in the repo profile or follow the most prevalent client implementation; document the choice in the PR.

How do you prevent stale caches or race conditions?

Use stable cache keys, explicit invalidation rules, and latest-only guards or request cancellation. Avoid ad-hoc refetch loops and prefer targeted invalidation on related mutations.