
backend-test-additions skill

/skills/backend/backend-test-additions

This skill helps you add focused backend tests to verify invariants, endpoint behavior, and retries across services.

npx playbooks add skill velcrafting/codex-skills --skill backend-test-additions

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (2.4 KB)
---
name: backend-test-additions
description: Add or extend backend tests to prove behavior, invariants, and regressions for services/endpoints/jobs.
metadata:
  short-description: Backend tests
  layer: backend
  mode: write
  idempotent: false
---

# Skill: backend/backend-test-additions

## Purpose
Add backend tests that prove:
- domain invariants
- endpoint behavior and error mapping
- job behavior (idempotency/retries) where applicable
- integration adapter parsing and failure handling

Prefer deterministic, high-signal tests.
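
At the unit layer, an invariant test can be very small. The sketch below uses pytest; `Order` and its non-negative-total rule are hypothetical stand-ins for whatever domain object the repo actually has.

```python
# Minimal sketch of a domain-invariant test. `Order` and its rule
# are hypothetical; substitute the repo's real domain objects.
import pytest


class Order:
    """Toy domain object: total must never be negative."""

    def __init__(self, total: int):
        if total < 0:
            raise ValueError("order total must be non-negative")
        self.total = total


def test_order_accepts_non_negative_total():
    assert Order(total=0).total == 0


def test_order_rejects_negative_total():
    with pytest.raises(ValueError):
        Order(total=-1)
```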

---

## Inputs
- Target module/endpoint/job
- Behavior to validate (acceptance criteria)
- Existing test stack and conventions (from repo/profile)
- Mocking strategy (fixtures, fakes, contract tests) if present

---

## Outputs
- New or updated test files
- Supporting fixtures/mocks
- Optional helper utilities if consistent with repo norms

---

## Non-goals
- Adding a new test framework
- Snapshot-only testing unless repo standard
- Broad refactors just to “make it testable” (prefer small extraction)

---

## Workflow
1) Identify test layers available:
   - unit tests (domain)
   - integration tests (endpoint boundary)
   - contract tests (external adapters)
2) Choose the highest-value layer with lowest flake risk.
3) Define the minimum test set (default; sketched after this list):
   - happy path
   - at least two failure paths:
     - validation/auth/error mapping OR invariant violation OR external failure
4) Mock at boundaries:
   - external APIs mocked at the adapter boundary
   - persistence mocked only if that's the repo standard; otherwise use a test DB
5) Keep tests deterministic:
   - avoid time-based waits when possible
   - control randomness, time, and ids
6) Run tests using repo commands.
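
A minimal sketch of steps 3-5, assuming pytest: the external API is faked at the adapter boundary, and the test set covers a happy path plus two failure paths (validation and external failure). `PaymentAdapter`, `FakeGateway`, and `charge` are all hypothetical names.

```python
# Happy path + two failure paths, with the external API faked at the
# adapter boundary. Every name here is a hypothetical illustration.
import pytest


class FakeGateway:
    """In-memory stand-in for the external payment API."""

    def __init__(self, fail=False):
        self.fail = fail

    def post_charge(self, amount):
        if self.fail:
            raise ConnectionError("gateway unavailable")
        return {"status": "ok", "amount": amount}


class PaymentAdapter:
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        try:
            return self.gateway.post_charge(amount)["status"]
        except ConnectionError:
            return "retry"  # retry decision surfaced to the caller


def test_charge_happy_path():
    assert PaymentAdapter(FakeGateway()).charge(100) == "ok"


def test_charge_rejects_invalid_amount():  # failure path 1: validation
    with pytest.raises(ValueError):
        PaymentAdapter(FakeGateway()).charge(0)


def test_charge_maps_external_failure_to_retry():  # failure path 2: external
    assert PaymentAdapter(FakeGateway(fail=True)).charge(100) == "retry"
```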

---

## Checks
- Tests pass locally (or a deterministic alternative is documented)
- Coverage includes:
  - happy path
  - at least two failure paths
- Tests assert outcomes users/operators care about (see the example after this list):
  - status codes, error codes, invariants, retry decisions
- Minimal flakiness risk
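
Outcome-focused endpoint assertions might look like the following, assuming a Flask stack (adapt to whatever framework the repo uses); the route and payload are hypothetical.

```python
# Sketch of endpoint assertions on status codes and error codes,
# assuming Flask. Route, payload, and error code are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)


@app.get("/items/<int:item_id>")
def get_item(item_id):
    if item_id != 1:
        return jsonify(error="not_found"), 404
    return jsonify(id=1, name="widget"), 200


def test_known_item_returns_200():
    resp = app.test_client().get("/items/1")
    assert resp.status_code == 200
    assert resp.get_json()["name"] == "widget"


def test_missing_item_maps_to_404_error_code():
    resp = app.test_client().get("/items/2")
    assert resp.status_code == 404
    assert resp.get_json()["error"] == "not_found"
```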

---

## Failure modes
- Test commands unknown → consult `REPO_PROFILE.json` or recommend `personalize-repo`.
- Flaky tests → stabilize by removing timing dependence and controlling mocks.
- Hard-to-test coupling → extract minimal module boundary and test there.

---

## Telemetry
Log:
- skill: `backend/backend-test-additions`
- test_type: `unit | integration | contract`
- failure_paths_covered: count
- files_touched
- outcome: `success | partial | blocked`
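
A record matching this spec might be emitted as a structured log line. The field names mirror this document; the logger name and transport are assumptions.

```python
# Hypothetical shape of the telemetry record described above.
import json
import logging

record = {
    "skill": "backend/backend-test-additions",
    "test_type": "integration",
    "failure_paths_covered": 2,
    "files_touched": ["tests/test_items_endpoint.py"],
    "outcome": "success",
}
logging.getLogger("skills").info(json.dumps(record))
```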

Overview

This skill helps add or extend backend tests to prove domain invariants, endpoint behavior, job semantics, and adapter parsing/failure handling. It focuses on deterministic, high-signal tests that reduce flakiness while documenting coverage and outcomes. The goal is small, targeted additions that validate behavior and prevent regressions.

How this skill works

You provide the target module, the acceptance criteria to validate, the repo's existing test conventions, and the preferred mocking strategy. The skill selects the most valuable test layer (unit, integration, or contract) with the lowest risk of flakiness, defines a minimal test set (happy path plus at least two failure cases), and generates test files, fixtures, and small helper utilities consistent with repo norms. It also logs telemetry about test type, failure paths covered, files changed, and outcome.

When to use it

  • When a new endpoint, job, or adapter needs behavior guarantees before release
  • When a bug fix requires regression tests to prevent reintroduction
  • When domain invariants must be codified and enforced in tests
  • When adding or changing retry/idempotency behavior for background jobs
  • When adapter parsing or external failure handling needs explicit verification

Best practices

  • Prefer the highest-value layer with the lowest flake risk (unit first, integration when boundary behavior matters)
  • Always include a happy path plus at least two distinct failure paths (validation/auth/error mapping or external failure)
  • Mock external systems at adapter boundaries; use test DBs if mocking persistence is not standard
  • Keep tests deterministic: control randomness, freeze time where needed, and avoid sleep-based waits (see the sketch after this list)
  • Make minimal code extractions only when necessary to enable focused testing
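
One way to get that determinism is to inject the clock and the RNG rather than patching globals or sleeping. A sketch, with `make_token` as a hypothetical helper:

```python
# Determinism sketch: inject time and randomness so the test pins both.
import random
from datetime import datetime, timezone


def make_token(now=None, rng=None):
    now = now or datetime.now(timezone.utc)
    rng = rng or random.Random()
    return f"{now:%Y%m%d}-{rng.randint(0, 9999):04d}"


def test_token_is_reproducible():
    fixed_now = datetime(2024, 1, 1, tzinfo=timezone.utc)
    assert make_token(fixed_now, random.Random(42)) == make_token(
        fixed_now, random.Random(42)
    )
```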

Example use cases

  • Add unit tests for domain invariants to ensure business rules never regress
  • Create integration tests verifying endpoint status/error codes and payload mapping
  • Validate job idempotency and retry decisions with controlled failure injection (see the sketch after this list)
  • Add contract tests that assert adapter parsing logic and graceful handling of malformed responses
  • Replace a flaky end-to-end check with targeted adapter boundary tests and mocks
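
For the idempotency case, the essential assertion is that a duplicate delivery is a no-op. A minimal sketch with hypothetical names:

```python
# Idempotency sketch: the job records processed ids, so a retried
# delivery of the same message does no additional work.
class SendEmailJob:
    def __init__(self):
        self.processed = set()
        self.sent = 0

    def run(self, message_id):
        if message_id in self.processed:
            return "skipped"  # duplicate delivery is a no-op
        self.sent += 1
        self.processed.add(message_id)
        return "sent"


def test_duplicate_delivery_is_idempotent():
    job = SendEmailJob()
    assert job.run("msg-1") == "sent"
    assert job.run("msg-1") == "skipped"
    assert job.sent == 1
```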

FAQ

What minimal test set do you recommend?

A happy path plus at least two failure paths that matter to users or operators, such as validation errors, auth failures, or external dependency failures.

When should I mock persistence versus using a test DB?

Follow the repo's existing conventions: mock persistence if the project standard does so; otherwise prefer a test DB to exercise realistic integration behavior.