This skill guides safe MongoDB schema changes and migrations with backward compatibility, batched backfills, and performance-conscious indexing.
```shell
npx playbooks add skill shipshitdev/library --skill mongodb-migration-expert
```
---
name: mongodb-migration-expert
description: Database schema design, indexing, and migration guidance for MongoDB-based applications.
---
# MongoDB Migration Expert
You design schema changes and migrations that are safe, indexed, and backwards compatible.
## When to Use
- Adding or changing MongoDB collections, indexes, or fields
- Designing schema patterns for multi-tenant or large datasets
- Planning forward-only migrations
## Core Principles
- Schema changes are additive first, destructive later.
- Backfill data in batches; avoid locking large collections.
- Indexes must match query patterns.
- Keep migrations idempotent and observable.
## Migration Workflow
1) Add new fields with defaults or nullable values.
2) Deploy code that handles both old and new fields.
3) Backfill data (scripted batches).
4) Add or adjust indexes after backfill if needed.
5) Remove legacy fields in a later release.
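Steps 2–3 above can be sketched as a batched, idempotent backfill script. This is a minimal sketch, not a prescribed implementation: the collection (`users`), the legacy field (`name`), and the new field (`displayName`) are assumed names for illustration. The `$exists: false` filter is what makes re-runs safe — documents that were already backfilled are simply skipped.

```python
# Batched, idempotent backfill sketch.
# Assumed names: collection `users`, legacy field `name`, new field `displayName`.
BATCH_SIZE = 500

def backfill_filter(last_id=None):
    """Filter selecting docs still missing the new field, resumable by _id."""
    f = {"displayName": {"$exists": False}}
    if last_id is not None:
        f["_id"] = {"$gt": last_id}  # resume after the last processed _id
    return f

def run_backfill(coll):
    """Process the collection in _id-ordered batches; returns docs updated."""
    updated = 0
    last_id = None
    while True:
        batch = list(
            coll.find(backfill_filter(last_id), {"_id": 1, "name": 1})
                .sort("_id", 1)
                .limit(BATCH_SIZE)
        )
        if not batch:
            return updated
        for doc in batch:
            # Repeat the $exists guard in the update filter so concurrent
            # writers and re-runs never overwrite an already-set value.
            coll.update_one(
                {"_id": doc["_id"], "displayName": {"$exists": False}},
                {"$set": {"displayName": doc.get("name", "")}},
            )
            updated += 1
        last_id = batch[-1]["_id"]
```

Batching by `_id` range (rather than `skip`/`limit`) keeps each query cheap and makes the script resumable if it is interrupted mid-run.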
## Indexing
- Add compound indexes for common filters and sorts.
- Avoid over-indexing; each index slows writes.
- Validate index usage with `explain`.
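A simple way to automate the `explain` check is to walk the winning plan and reject full collection scans. The sketch below is simplified — it follows only the single `inputStage` chain and does not handle `$or` plans with multiple input stages — but the `queryPlanner.winningPlan` structure it reads is standard MongoDB explain output.

```python
def uses_index(explain_output):
    """Return True if the winning plan avoids a full collection scan.

    `explain_output` is the dict produced by explain() on a query;
    we walk winningPlan down its inputStage chain looking for COLLSCAN.
    """
    plan = explain_output["queryPlanner"]["winningPlan"]
    while plan:
        if plan.get("stage") == "COLLSCAN":
            return False
        plan = plan.get("inputStage")
    return True
```

Run this against explain output for each critical query in CI or a pre-deploy check, so an index regression fails loudly instead of surfacing as slow reads in production.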
## Multi-tenant Pattern (if applicable)
- Include `tenantId` on documents.
- Compound indexes should start with `tenantId`.
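One way to enforce the `tenantId`-first rule is to build index key lists through a small helper, so no compound index can be declared without the tenant prefix. The helper name and the `orders` collection below are illustrative, not part of any library API.

```python
def tenant_index_keys(*fields):
    """Build a compound index key list that always leads with tenantId.

    Key order matters: equality on tenantId comes first, followed by the
    fields your queries filter or sort on.
    """
    return [("tenantId", 1)] + [(f, 1) for f in fields]

# Usage with pymongo (assumed collection `orders`):
#   orders.create_index(tenant_index_keys("status", "createdAt"))
```

Leading with `tenantId` lets every tenant-scoped query narrow to one tenant's slice of the index before scanning further keys.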
## Checklist
- Backwards compatible reads and writes
- Idempotent scripts
- Indexes created with safe options
- Roll-forward plan documented
## FAQ

**How do I avoid long collection locks during migration?**

Avoid operations that rebuild collections in place. Use additive changes, backfill in small batches, and create indexes using background or rolling strategies when available.

**When should I create indexes during migration?**

Prefer creating indexes after backfill to avoid expensive index builds on partially populated fields. If queries need the index immediately, build it incrementally and monitor write impact.
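Post-backfill index creation is also where idempotence pays off: a migration script that may be re-run should check for an existing index before building it. The sketch below assumes the dict shape returned by pymongo's `Collection.index_information()`, where each entry carries a `"key"` list of `(field, direction)` pairs.

```python
def index_exists(index_info, keys):
    """Check whether an index with the given key pattern already exists.

    `index_info` is the dict from pymongo's Collection.index_information();
    `keys` is a list of (field, direction) pairs, e.g. [("tenantId", 1)].
    """
    target = list(keys)
    return any(list(info.get("key", [])) == target
               for info in index_info.values())

# Usage in a migration script (assumed collection `orders`):
#   if not index_exists(orders.index_information(), [("tenantId", 1), ("status", 1)]):
#       orders.create_index([("tenantId", 1), ("status", 1)])
```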