
frontend-test-additions skill

/skills/frontend/frontend-test-additions

This skill adds frontend tests to prove UI behavior and prevent regressions across components and routes.

npx playbooks add skill velcrafting/codex-skills --skill frontend-test-additions


---
name: frontend-test-additions
description: Add or extend frontend tests to cover intended behavior and critical UI states.
metadata:
  short-description: Frontend tests
  layer: frontend
  mode: write
  idempotent: false
---

# Skill: frontend/frontend-test-additions

## Purpose
Add frontend tests that prove behavior, prevent regressions, and validate UI states.
Prefer tests that are stable and aligned with repo testing conventions.

---

## Inputs
- Target component/route/flow
- Behavior to prove (acceptance criteria)
- Existing test framework and patterns (from repo or `REPO_PROFILE.json`)
- Mocking strategy (MSW, fetch mocks, fixtures) if present

---

## Outputs
- New or updated tests
- Supporting fixtures/mocks if needed
- Optional: test helper utilities if consistent with repo patterns

---

## Non-goals
- Refactoring unrelated UI code
- Adding a new test framework
- Snapshot-only testing, unless it is the repo standard and justified

---

## Workflow
1) Identify test stack:
   - unit/component tests (testing-library)
   - e2e tests (playwright/cypress)
   - choose the highest-value, lowest-flake layer available
2) Define minimum test set (default):
   - happy path render
   - at least one error or empty state
   - at least one interaction (if interactive)
3) Prefer role/name queries over brittle selectors.
4) Mock data at the boundary:
   - API layer via MSW/mocks
   - avoid mocking internal implementation details
5) Add tests incrementally and keep them deterministic.
6) Run tests using repo commands.

---

## Checks
- Tests pass locally (or explain deterministic alternative if tests cannot run)
- Tests validate intended user-observable behavior
- Tests cover at least two states when data-driven:
  - success + (error or empty)
- Minimal flakiness risk:
  - avoid time-based waits without reason
  - prefer explicit awaits on UI changes

---

## Failure modes
- Test commands unknown → consult `REPO_PROFILE.json` or recommend `personalize-repo`.
- Flaky tests appear → stabilize by removing timing dependence and improving mocks.
- Difficult to test due to tight coupling → recommend extraction or boundary mocking.

---

## Telemetry
Log:
- skill: `frontend/frontend-test-additions`
- test_type: `unit | component | e2e`
- states_covered: `success | loading | error | empty` (subset)
- files_touched
- outcome: `success | partial | blocked`

Overview

This skill helps add or extend frontend tests that prove behavior, prevent regressions, and validate critical UI states. It prioritizes stable, deterministic tests that follow the repository's existing testing conventions and tooling. The goal is practical coverage: render, error/empty states, and at least one user interaction when relevant.

How this skill works

I inspect the target component, route, or user flow and map it to the repo's test stack (unit/component/e2e). I define a minimal, high-value test set and create tests, fixtures, and lightweight helpers consistent with existing patterns. Tests mock external boundaries (APIs) rather than internal implementation, and I run them with the repository's standard commands to ensure determinism and low flakiness.

When to use it

  • When a UI change needs verification to prevent regressions
  • When a critical user flow lacks coverage (happy + error/empty states)
  • When a component shows intermittent bugs and needs deterministic tests
  • When adding features that affect data-driven states or interactions
  • When onboarding tests to a component following repo patterns

Best practices

  • Pick the highest-value, lowest-flake layer (component vs e2e) for the change
  • Cover at least two data-driven states: success plus error or empty
  • Query elements by role/name instead of fragile DOM selectors
  • Mock at the boundary (MSW or fetch mocks) and avoid internal implementation mocks
  • Avoid time-based waits; await explicit UI changes to reduce flakiness
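Mocking at the boundary, as recommended above, can be sketched without any framework: replace only the network edge and let all internal logic run as-is. `fetchUsers`, `okFetch`, and `errorFetch` are hypothetical names; in a real repo, MSW handlers or a fetch mock would play the role of the stubs.

```typescript
// A hypothetical data loader that takes its fetch implementation as a
// parameter, so tests can swap the network edge without touching internals.
type User = { id: number; name: string };
type FetchLike = (url: string) => Promise<{
  ok: boolean;
  status: number;
  json(): Promise<unknown>;
}>;

async function fetchUsers(fetchImpl: FetchLike): Promise<User[]> {
  const res = await fetchImpl("/api/users");
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return (await res.json()) as User[];
}

// Success stub: the boundary returns a canned payload.
const okFetch: FetchLike = async () => ({
  ok: true,
  status: 200,
  json: async () => [{ id: 1, name: "Ada" }],
});

// Error stub: the boundary simulates a 500 so the error path can be asserted.
const errorFetch: FetchLike = async () => ({
  ok: false,
  status: 500,
  json: async () => ({}),
});
```

Because only the boundary is stubbed, the same tests keep passing when internal implementation details change, which is exactly the stability this skill aims for.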

Example use cases

  • Add component tests for a list view: render, empty list, and item click
  • Add an e2e test for signup flow: successful submission and validation errors
  • Extend API-driven component tests using MSW to simulate success and 500 error
  • Create helper fixtures when multiple tests need the same mocked responses
  • Stabilize flaky tests by replacing timeouts with awaits on DOM updates
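The shared-fixture use case might look like a small factory with sensible defaults and per-test overrides, so each test states only the field it cares about. `makeUser` and its fields are illustrative, not from a specific repo.

```typescript
// Hypothetical fixture factory: defaults plus overrides keep tests short
// and make the relevant field of each test explicit.
type User = { id: number; name: string; email: string; active: boolean };

function makeUser(overrides: Partial<User> = {}): User {
  return {
    id: 1,
    name: "Test User",
    email: "test@example.com",
    active: true,
    ...overrides,
  };
}

// A user for the disabled/empty state, without repeating every default.
const inactive = makeUser({ active: false });
console.log(inactive.active); // false
```

Centralizing defaults this way means a schema change touches one factory instead of every test file that builds a user by hand.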

FAQ

What if tests fail locally due to unknown commands?

Check `REPO_PROFILE.json` for test commands, or request a personalize-repo step to surface the correct scripts.

Should I add a new test framework for coverage?

No. Use the existing framework in the repo. Adding frameworks is out of scope.