
persistence-layer-change skill

/skills/backend/persistence-layer-change

This skill manages safe persistence-layer changes by guiding schema migrations, data-access updates, and rollout plans that prevent data loss.

npx playbooks add skill velcrafting/codex-skills --skill persistence-layer-change

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md
---
name: persistence-layer-change
description: Implement schema/migration changes and update data access patterns safely.
metadata:
  short-description: Schema + migration + safe access updates
  layer: backend
  mode: write
  idempotent: false
---

# Skill: backend/persistence-layer-change

## Purpose
Change persistence safely by managing:
- schema changes (tables/columns/indexes)
- migrations
- updated read/write paths
- compatibility and rollout strategy

This skill is responsible for preventing data loss and breaking changes.

---

## Inputs
- Desired schema change (add/alter/remove)
- Backward compatibility needs:
  - whether the change can deploy in one step or requires a multi-step rollout
- Data access layers involved (ORM/query builder/raw SQL)
- Repo profile (preferred): `<repo>/REPO_PROFILE.json`

---

## Outputs
- Migration(s) or schema change artifact(s)
- Updated data access code to use new schema
- Tests updated or added
- Rollout notes when a multi-step rollout is needed (in comments or docs, per repo standard)

---

## Non-goals
- Business logic changes unrelated to the schema
- Endpoint wiring (unless required for compilation)
- Rewriting the persistence layer wholesale

---

## Workflow
1) Identify existing migration mechanism and conventions.
2) Choose safe change strategy:
   - additive change first (preferred)
   - dual-write or backfill if required
   - destructive change only with explicit migration plan
3) Implement the migration with deterministic up/down behavior when the framework supports it (see the sketch after this list).
4) Update data access paths:
   - reads tolerate both states during rollout if needed
   - writes remain consistent
5) Add/adjust indexes carefully (avoid accidental perf regressions).
6) Add tests:
   - migration applies
   - reads/writes work
7) Run required validations from profile.
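
Below is a minimal sketch of steps 2–3 for an additive change, assuming an Alembic/SQLAlchemy setup; the table, column, and revision names are illustrative, and the repo's own migration framework should be used instead if it differs.

```python
# Hypothetical Alembic migration: additive change with deterministic up/down.
# Table/column names and revision ids are placeholders.
from alembic import op
import sqlalchemy as sa

revision = "a1b2c3d4e5f6"        # placeholder revision id
down_revision = "f6e5d4c3b2a1"   # placeholder parent revision

def upgrade():
    # Additive and nullable, so code that never touches the column
    # keeps working during rollout.
    op.add_column("users", sa.Column("email_verified", sa.Boolean(), nullable=True))

def downgrade():
    # Deterministic down path: drop exactly what upgrade added.
    op.drop_column("users", "email_verified")
```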

---

## Checks
- Migration applies cleanly in a fresh environment, where the repo supports one (see the test sketch below)
- No destructive change without explicit plan
- Tests pass (and cover at least one new access path)
- Typecheck/lint pass if configured
- Performance hazards considered (indexes, N+1, full scans)
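
A minimal sketch of the first check, assuming an Alembic setup, pytest, and a disposable SQLite database (paths and names are assumptions; adapt to the repo's test harness):

```python
# Sketch: verify migrations apply cleanly against a fresh database.
# Assumes alembic.ini at the repo root; adjust to the repo's layout.
import sqlalchemy as sa
from alembic import command
from alembic.config import Config

def test_migrations_apply_to_fresh_database(tmp_path):
    db_url = f"sqlite:///{tmp_path / 'fresh.db'}"
    cfg = Config("alembic.ini")
    cfg.set_main_option("sqlalchemy.url", db_url)

    command.upgrade(cfg, "head")  # must succeed from an empty schema

    # Exercise one new access path: the added column exists.
    engine = sa.create_engine(db_url)
    with engine.connect() as conn:
        cols = {c["name"] for c in sa.inspect(conn).get_columns("users")}
        assert "email_verified" in cols
```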

---

## Failure modes
- Migration framework missing → stop and recommend establishing it before change.
- Destructive change requested → require staged plan and rollback notes.
- Unknown production constraints → recommend conservative additive migration first.

---

## Telemetry
Log:
- skill: `backend/persistence-layer-change`
- migration_type: `additive | staged | destructive`
- artifacts_written: migration file(s)
- files_touched
- outcome: `success | partial | blocked`
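
An illustrative record with this shape (values are examples, not prescribed; the sink and format are repo-specific):

```python
# Illustrative telemetry record; field names mirror the list above.
telemetry = {
    "skill": "backend/persistence-layer-change",
    "migration_type": "additive",
    "artifacts_written": ["migrations/0042_add_email_verified.py"],
    "files_touched": ["app/models/user.py", "tests/test_migrations.py"],
    "outcome": "success",
}
```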

Overview

This skill implements schema and migration changes and updates data access patterns safely to avoid data loss and runtime breakage. It produces migration artifacts, updates read/write code paths, and adds tests and rollout notes when multi-step deployments are required. The goal is deterministic, reversible changes with conservative rollout strategies.

How this skill works

The skill inspects the repository to find the migration framework, ORM or raw-SQL usage, and any repository profile that defines validations. It chooses a safe change strategy (additive-first, dual-write/backfill, or staged destructive change), generates migrations with deterministic up/down behavior, and updates data access logic so reads and writes remain compatible during rollout. Finally, it adds or updates tests, runs the validations required by the repo profile, and emits telemetry.
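
For example, the write path during a dual-write rollout might look like this sketch, assuming SQLAlchemy and a hypothetical column rename from `name` to `display_name`:

```python
# Sketch: dual-write during a column rename rollout.
# Old column `name`, new column `display_name`; both names are hypothetical.
import sqlalchemy as sa

def update_user_name(conn, user_id: int, new_name: str) -> None:
    # Writing both columns keeps readers on either schema version consistent.
    conn.execute(
        sa.text("UPDATE users SET name = :n, display_name = :n WHERE id = :id"),
        {"n": new_name, "id": user_id},
    )
```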

When to use it

  • Adding, renaming, or removing columns or tables
  • Introducing or changing indexes that affect query plans
  • Refactoring read/write code to use a new schema layout
  • Performing backfills or dual-write rollouts for large datasets
  • When you must ensure zero-downtime or reversible schema changes

Best practices

  • Prefer additive changes first (new columns/tables/indexes) to preserve compatibility
  • Plan destructive changes as staged rollouts with explicit backfill and rollback steps
  • Ensure migrations are deterministic with clear up and down behavior when supported
  • Make reads tolerant of both old and new shapes during rollout; keep writes consistent (see the read-path sketch after this list)
  • Add tests that apply migrations on a fresh environment and exercise the new access paths
  • Validate typechecks, lints, and performance implications (indexes, N+1, full scans)
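
A minimal sketch of a rollout-tolerant read, assuming the same hypothetical rename (`name` to `display_name`):

```python
# Sketch: read path that tolerates both schema states during rollout.
def user_display_name(row: dict) -> str:
    # Prefer the new column; fall back to the old one until backfill completes.
    value = row.get("display_name")
    return value if value is not None else row["name"]
```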

Example use cases

  • Add a nullable column, backfill values in a background job, then make it non-nullable in a later deployment (staged sketch after this list)
  • Introduce a new indexed lookup table and switch reads to join with it while keeping legacy reads until rollout completes
  • Split a large table into two for performance and implement dual-write plus a consumer verifying data parity before cutover
  • Rename a column by adding a new column, updating accessors, backfilling, and then dropping the old column in a final migration
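
The first use case might unfold over three deployments, sketched here in Alembic style; in practice each stage is its own migration file or job, and all names are hypothetical.

```python
# Sketch of a three-stage rollout; each stage ships separately.
from alembic import op
import sqlalchemy as sa

# Stage 1 (deploy 1): additive, nullable column -- old code keeps working.
def stage_1_upgrade():
    op.add_column("users", sa.Column("email_verified", sa.Boolean(), nullable=True))

# Stage 2 (background job, not a migration): backfill existing rows.
def stage_2_backfill(conn):
    conn.execute(sa.text(
        "UPDATE users SET email_verified = FALSE WHERE email_verified IS NULL"
    ))

# Stage 3 (deploy 2, after verifying the backfill): tighten the constraint.
def stage_3_upgrade():
    op.alter_column("users", "email_verified",
                    existing_type=sa.Boolean(), nullable=False)
```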

FAQ

What if the repo has no migration framework?

Stop and recommend adding a lightweight, supported migration mechanism before making schema changes; proceed only after establishing it.

When is destructive change acceptable?

Only with an explicit staged plan that includes backfill, verification, and rollback instructions; document and test each stage.