
doctrine-batch-processing skill

/skills/doctrine-batch-processing

This skill helps evolve Symfony Doctrine models and schema safely, with a focus on batch processing, integrity, and rollout discipline across migrations and tests.

npx playbooks add skill makfly/superpowers-symfony --skill doctrine-batch-processing

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
1.1 KB
---

name: symfony:doctrine-batch-processing
allowed-tools:
  - Read
  - Write
  - Edit
  - Bash
  - Glob
  - Grep
description: Evolve Symfony Doctrine models and schema safely with integrity, performance, and rollout discipline. Use for doctrine batch processing tasks.
---

# Doctrine Batch Processing (Symfony)

## Use when
- Designing entity relations or schema evolution.
- Improving Doctrine correctness/performance.

## Default workflow
1. Model ownership/cardinality and transactional boundaries.
2. Apply mapping/schema changes with migration safety.
3. Tune fetch/query behavior for hot paths.
4. Verify lifecycle behavior with targeted tests.

## Guardrails
- Keep owning/inverse sides coherent.
- Avoid destructive migration jumps in one release.
- Eliminate accidental N+1 and over-fetching.
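The first guardrail can be made concrete with a small mapping sketch. The entity names (`Invoice`, `InvoiceLine`) are illustrative, not part of the skill; the point is that the owning side carries the foreign key, the inverse side declares `mappedBy`, and a single helper keeps both sides in sync:

```php
<?php

use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\Common\Collections\Collection;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity]
class Invoice
{
    #[ORM\Id, ORM\GeneratedValue, ORM\Column]
    private ?int $id = null;

    /** @var Collection<int, InvoiceLine> Inverse side: mappedBy mirrors InvoiceLine::$invoice. */
    #[ORM\OneToMany(targetEntity: InvoiceLine::class, mappedBy: 'invoice', cascade: ['persist'])]
    private Collection $lines;

    public function __construct()
    {
        $this->lines = new ArrayCollection();
    }

    public function addLine(InvoiceLine $line): void
    {
        // Sync both sides in one place so callers cannot desynchronize them.
        if (!$this->lines->contains($line)) {
            $this->lines->add($line);
            $line->setInvoice($this);
        }
    }
}

#[ORM\Entity]
class InvoiceLine
{
    #[ORM\Id, ORM\GeneratedValue, ORM\Column]
    private ?int $id = null;

    // Owning side: only this side is persisted; inversedBy mirrors Invoice::$lines.
    #[ORM\ManyToOne(targetEntity: Invoice::class, inversedBy: 'lines')]
    #[ORM\JoinColumn(nullable: false)]
    private ?Invoice $invoice = null;

    public function setInvoice(?Invoice $invoice): void
    {
        $this->invoice = $invoice;
    }
}
```

Changes made only to the inverse collection are silently ignored at flush time, which is why the helper routes every mutation through the owning side.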

## Progressive disclosure
- Use this file for execution posture and risk controls.
- Open references when deep implementation details are needed.

## Output contract
- Entity/migration changes.
- Integrity and performance decisions.
- Validation outcomes and rollback notes.

## References
- `reference.md`
- `docs/complexity-tiers.md`

Overview

This skill helps evolve Symfony Doctrine models and database schema safely, focusing on integrity, performance, and conservative rollout. It guides model ownership, transactional boundaries, mapping changes, and query tuning. It is designed for teams performing Doctrine batch processing and schema evolution with minimal risk.

How this skill works

The skill inspects entity mappings, ownership/ inverse relationships, and migration plans to detect risky or destructive changes. It recommends migration strategies, fetch and query tuning for hot paths, and test patterns to verify lifecycle and transactional behavior. Outputs include precise entity/migration changes, integrity decisions, performance recommendations, and rollback notes.

When to use it

  • Designing or changing entity relationships and ownership
  • Making schema or mapping changes that affect large datasets
  • Optimizing Doctrine queries and fetch strategies for hot code paths
  • Preparing migrations for progressive rollout with integrity guarantees
  • Reviewing batch processing jobs that may trigger N+1 or over-fetching

Best practices

  • Keep owning and inverse sides coherent and explicitly mapped to avoid ambiguous cascades
  • Break destructive migrations into smaller, reversible steps across releases
  • Define clear transactional boundaries for batch jobs to limit lock scope and rollback impact
  • Tune fetch modes and DQL for hot paths; prefer joins with pagination over repeated lazy loads
  • Add targeted integration tests to validate lifecycle callbacks and migration outcomes before rollout
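The batching and transactional-boundary practices above can be sketched as a minimal job, assuming a hypothetical `App\Entity\Product` with a `setReindexedAt()` setter. Each `flush()` runs in its own short transaction, which bounds lock scope and rollback impact per batch, and `clear()` keeps memory flat on large datasets:

```php
<?php

use Doctrine\ORM\EntityManagerInterface;

function reindexProducts(EntityManagerInterface $em, int $batchSize = 200): void
{
    // toIterable() streams rows instead of hydrating the full result set at once.
    $query = $em->createQuery('SELECT p FROM App\Entity\Product p');

    $i = 0;
    foreach ($query->toIterable() as $product) {
        $product->setReindexedAt(new \DateTimeImmutable());

        if (++$i % $batchSize === 0) {
            $em->flush(); // push pending UPDATEs in one short transaction
            $em->clear(); // detach managed entities to keep memory usage flat
        }
    }

    $em->flush();
    $em->clear();
}
```

Note that after `clear()` any entity references held outside the loop are detached, so batch jobs should re-fetch rather than reuse them.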

Example use cases

  • Refactoring a many-to-many relationship into an explicit join entity without downtime
  • Converting a nullable column to non-nullable with phased default population and backfill jobs
  • Optimizing an export batch job to eliminate an N+1 by introducing a single join query and controlled pagination
  • Adding a new index and testing its impact on large batch writes before applying globally
  • Designing migration steps that allow safe rollback if integrity violations appear during background processing
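The N+1 export case above can be sketched as one fetch-join query with controlled pagination. The `Invoice`/`lines` names are illustrative; `Paginator` with `fetchJoinCollection: true` is needed so the limit applies to root entities rather than joined rows:

```php
<?php

use Doctrine\ORM\EntityManagerInterface;
use Doctrine\ORM\Tools\Pagination\Paginator;

function exportInvoices(EntityManagerInterface $em, int $page, int $pageSize = 100): Paginator
{
    $query = $em->createQuery(
        // One round trip loads each invoice with its lines, replacing
        // one SELECT per invoice (the N+1 pattern).
        'SELECT i, l FROM App\Entity\Invoice i LEFT JOIN i.lines l ORDER BY i.id'
    )
    ->setFirstResult($page * $pageSize)
    ->setMaxResults($pageSize);

    return new Paginator($query, fetchJoinCollection: true);
}
```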

FAQ

How do I avoid large downtime when changing ownership or cardinality?

Split the change into multiple releases: add new fields or join entities first, backfill data with safe batch jobs, switch application code to use the new model, then remove old structures in a later release.
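The first release of such a phased rollout might look like the migration sketch below (table and column names are hypothetical). The expand step only adds a nullable column, so `down()` stays trivially safe; backfill and the NOT NULL constraint land in later releases:

```php
<?php

declare(strict_types=1);

use Doctrine\DBAL\Schema\Schema;
use Doctrine\Migrations\AbstractMigration;

final class Version20240101000000 extends AbstractMigration
{
    public function up(Schema $schema): void
    {
        // Expand phase: nullable column, no in-migration backfill.
        // Backfill runs as a separate batch job; the NOT NULL constraint
        // is added only after the backfill is verified, in a later release.
        $this->addSql('ALTER TABLE invoice ADD currency VARCHAR(3) DEFAULT NULL');
    }

    public function down(Schema $schema): void
    {
        // Rollback simply drops the still-optional column.
        $this->addSql('ALTER TABLE invoice DROP currency');
    }
}
```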

What’s the best way to detect N+1 problems before production?

Run focused integration tests and profiling on representative data sizes, inspect executed queries for repeated SELECT patterns, and use logging or tracing to measure query counts per request or job.
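One way to make query counts regression-proof is a budget assertion in an integration test. In this sketch, `QueryCounter` and `bootEntityManager()` are hypothetical test helpers (a SQL-logging decorator registered on the DBAL connection and a test bootstrap); the idea is that a fixed query budget turns an N+1 regression into a failing test:

```php
<?php

use PHPUnit\Framework\TestCase;

final class InvoiceExportQueryCountTest extends TestCase
{
    public function testExportStaysWithinQueryBudget(): void
    {
        $counter = new QueryCounter();            // hypothetical SQL logger
        $em = self::bootEntityManager($counter);  // hypothetical test bootstrap

        exportInvoices($em, page: 0);

        // With the fetch-join in place, the export should issue a constant
        // number of queries regardless of invoice count; a jump flags an N+1.
        self::assertLessThanOrEqual(3, $counter->count());
    }
}
```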