
This skill migrates applications to event-sourcing architectures by extracting events, configuring stores, generating projections, and implementing CQRS.

npx playbooks add skill a5c-ai/babysitter --skill event-sourcing-migrator


SKILL.md
---
name: event-sourcing-migrator
description: Migrate to event-sourcing architecture with event extraction, store setup, and CQRS implementation
allowed-tools: ["Bash", "Read", "Write", "Grep", "Glob", "Edit"]
---

# Event Sourcing Migrator Skill

Migrates applications to event-sourcing architecture, handling event extraction from existing data, event store setup, and CQRS implementation.

## Purpose

Enable event sourcing migration for:
- Event extraction from existing data
- Event store setup
- Projection generation
- CQRS implementation
- Snapshot management

## Capabilities

### 1. Event Extraction from Existing Data
- Analyze current state
- Derive historical events
- Generate event streams
- Handle data gaps
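The extraction step can be sketched in JavaScript (the skill's implementation language). The `account` record shape, event names, and `meta` fields below are illustrative assumptions, not the skill's actual API:

```javascript
// Derive a historical event stream from a current-state record.
function extractEvents(account) {
  const events = [];
  events.push({ type: 'AccountOpened', accountId: account.id, at: account.createdAt });
  if (account.closedAt) {
    events.push({ type: 'AccountClosed', accountId: account.id, at: account.closedAt });
  }
  // Data gap: individual deposits are unknown, so emit one inferred
  // balance-adjustment event annotated with provenance metadata.
  if (account.balance !== 0) {
    events.push({
      type: 'BalanceAdjusted',
      accountId: account.id,
      amount: account.balance,
      meta: { inferred: true, source: 'state-snapshot' },
    });
  }
  return events;
}

const stream = extractEvents({ id: 'a-1', createdAt: '2023-01-05', balance: 250, closedAt: null });
console.log(stream.map(e => e.type)); // ['AccountOpened', 'BalanceAdjusted']
```

Annotating inferred events at extraction time lets downstream consumers filter or review them, which matters when the legacy data cannot fully reconstruct history.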

### 2. Event Store Setup
- Configure event store
- Set up partitioning
- Define retention
- Implement subscriptions
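A store configuration covering these concerns might look like the sketch below. The field names are illustrative, not a specific product's configuration format; the partition router shows one common scheme (hashing the aggregate ID):

```javascript
// Hypothetical event store configuration: partitioning, retention, subscriptions.
const storeConfig = {
  backend: 'postgresql',
  partitioning: { scheme: 'by-aggregate-id', partitions: 16 },
  retention: { policy: 'keep-forever', archiveAfterDays: 365 },
  subscriptions: [
    { name: 'order-projection', stream: '$all', fromPosition: 0 },
  ],
};

// Deterministic partition router consistent with the scheme above.
function partitionFor(aggregateId, partitions) {
  let hash = 0;
  for (const ch of aggregateId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % partitions;
}

const p = partitionFor('order-42', storeConfig.partitioning.partitions);
```

Hashing by aggregate ID keeps each aggregate's events in one partition, preserving per-aggregate ordering while spreading load across partitions.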

### 3. Projection Generation
- Create read models
- Build projections
- Handle updates
- Manage consistency
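A projection is essentially a fold over the event stream into a read model. The event shapes here are hypothetical; in practice the skill would generate handlers from the extracted event catalog. The `lastPosition` checkpoint is what allows incremental updates and consistency checks:

```javascript
// Fold an event stream into an order-summary read model.
function orderSummaryProjection(events) {
  const summary = { orders: 0, revenue: 0, lastPosition: -1 };
  for (const [position, event] of events.entries()) {
    switch (event.type) {
      case 'OrderPlaced':
        summary.orders += 1;
        summary.revenue += event.total;
        break;
      case 'OrderCancelled':
        summary.orders -= 1;
        summary.revenue -= event.total;
        break;
    }
    summary.lastPosition = position; // checkpoint for resuming updates
  }
  return summary;
}

const view = orderSummaryProjection([
  { type: 'OrderPlaced', total: 100 },
  { type: 'OrderPlaced', total: 50 },
  { type: 'OrderCancelled', total: 50 },
]);
// view => { orders: 1, revenue: 100, lastPosition: 2 }
```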

### 4. CQRS Implementation
- Separate read/write
- Implement commands
- Handle queries
- Manage eventual consistency
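The read/write separation and its eventual-consistency window can be shown in a minimal sketch. All names are illustrative; in production the read-model update runs asynchronously off a store subscription rather than being called by hand:

```javascript
// Minimal CQRS: commands append events to the write side; queries read
// from a separately maintained, eventually consistent read model.
const eventLog = [];          // write side
let readModel = { count: 0 }; // read side

function handleCommand(cmd) {
  if (cmd.type !== 'AddItem') throw new Error('unknown command');
  eventLog.push({ type: 'ItemAdded', item: cmd.item });
}

function updateReadModel() {
  // Stand-in for an asynchronous subscription catching up.
  readModel = { count: eventLog.filter(e => e.type === 'ItemAdded').length };
}

function handleQuery() {
  return readModel.count; // never touches the event log directly
}

handleCommand({ type: 'AddItem', item: 'a' });
const staleCount = handleQuery(); // 0: read side not yet caught up
updateReadModel();
const freshCount = handleQuery(); // 1
```

The gap between `staleCount` and `freshCount` is exactly the eventual-consistency window the migration has to account for in client code.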

### 5. Snapshot Management
- Define snapshot strategy
- Generate snapshots
- Handle restoration
- Optimize performance
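A snapshot persists aggregate state at a known stream position so a rebuild replays only the tail. The interval-based strategy and state shape below are illustrative:

```javascript
// Rebuild aggregate state from a snapshot plus the events after it.
function rebuild(events, snapshot) {
  const state = snapshot ? { ...snapshot.state } : { balance: 0 };
  const from = snapshot ? snapshot.position + 1 : 0;
  for (let i = from; i < events.length; i++) {
    state.balance += events[i].amount;
  }
  return state;
}

// Take a snapshot only when the stream length hits the interval.
function takeSnapshot(events, interval) {
  const position = events.length - 1;
  if ((position + 1) % interval !== 0) return null;
  return { position, state: rebuild(events, null) };
}

const events = [{ amount: 10 }, { amount: 5 }, { amount: -3 }, { amount: 8 }];
const snap = takeSnapshot(events.slice(0, 2), 2); // { position: 1, state: { balance: 15 } }
const state = rebuild(events, snap);              // replays only events 2..3
// state.balance === 20
```

Choosing the interval is the trade-off named in the best practices below: frequent snapshots cost storage and write amplification, sparse ones lengthen rebuilds.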

### 6. Event Replay
- Replay events
- Rebuild projections
- Handle migrations
- Test consistency
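Consistency testing during replay reduces to: rebuild the projection from scratch and compare it to the incrementally maintained one. A sketch with a deliberately trivial apply function:

```javascript
// Apply one event to projection state (pure function, so replay is deterministic).
function applyEvent(state, event) {
  return { total: state.total + event.amount };
}

// Full replay: fold every event from position 0.
function replay(events) {
  return events.reduce(applyEvent, { total: 0 });
}

const events = [{ amount: 3 }, { amount: 7 }];

// Incrementally maintained projection (as it would exist pre-migration).
let incremental = { total: 0 };
for (const e of events) incremental = applyEvent(incremental, e);

const rebuilt = replay(events);
const consistent = rebuilt.total === incremental.total; // true
```

Because `applyEvent` is pure, any divergence between `rebuilt` and `incremental` points to a migration defect (lost events, reordered streams, or changed handler logic) rather than nondeterminism.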

## Tool Integrations

| Tool | Purpose | Integration Method |
|------|---------|-------------------|
| EventStore | Event database | CLI/API |
| Axon Framework | Java event sourcing | Library |
| Marten | .NET event store | Library |
| EventStoreDB | Event store | CLI |
| Custom stores | PostgreSQL/Kafka | Library |

## Output Schema

```json
{
  "migrationId": "string",
  "timestamp": "ISO8601",
  "eventStore": {
    "type": "string",
    "streams": "number",
    "events": "number"
  },
  "projections": [
    {
      "name": "string",
      "status": "string",
      "lastPosition": "number"
    }
  ],
  "snapshots": {
    "enabled": "boolean",
    "count": "number"
  }
}
```

## Integration with Migration Processes

- **monolith-to-microservices**: Event-driven architecture
- **database-schema-migration**: Data transformation

## Related Skills

- `domain-model-extractor`: Event discovery

## Related Agents

- `data-architect-agent`: Event architecture

Overview

This skill helps teams migrate applications to an event-sourcing architecture by extracting events from existing data, provisioning an event store, and implementing CQRS and projections. It focuses on reproducible migration steps, snapshot strategies, and safe event replay to rebuild read models and verify consistency. The skill is implemented in JavaScript and integrates with common event stores and streaming systems.

How this skill works

The migrator analyzes current state and historical data to derive deterministic event streams and fills data gaps with inferred or annotated events. It automates event store setup (partitioning, retention, subscriptions), generates projections and read models, and scaffolds CQRS command and query handling. It also offers snapshot management and controlled event replay to rebuild projections and validate migrations.

When to use it

  • Migrating a monolith to event-driven microservices
  • Replacing a CRUD model with event sourcing for auditability
  • Creating durable, replayable histories for complex domains
  • Rebuilding read models after schema or business logic changes
  • Planning performance improvements using snapshots and partitioning

Best practices

  • Start by extracting a canonical domain model and mapping state to events
  • Annotate inferred events to indicate provenance and uncertainty
  • Use incremental replay and test projections in isolated environments
  • Choose snapshot frequency based on event density and rebuild cost
  • Define retention and partitioning policies before bulk import

Example use cases

  • Deriving customer lifecycle events from legacy billing and CRM tables to enable event-driven analytics
  • Setting up EventStoreDB or PostgreSQL-based stores, with partitioning and subscriptions configured for high throughput
  • Generating projections for order dashboards and implementing CQRS handlers for commands and queries
  • Replaying historical events to validate a schema migration and regenerate read models
  • Creating snapshots for large aggregates to speed up aggregate reconstruction during high-load operations

FAQ

Can it work with existing relational databases?

Yes. The skill analyzes relational state, maps changes to domain events, and supports importing events into relational-backed or dedicated event stores.
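Mapping relational changes to events can be sketched as a change-data-capture style transform over old-row/new-row pairs. The table, column, and event names here are hypothetical:

```javascript
// Map a relational row change (old vs. new) to a domain event, or null
// when the change carries no domain meaning.
function rowChangeToEvent(table, oldRow, newRow) {
  if (table === 'orders' && oldRow.status !== newRow.status) {
    return {
      type: `OrderStatus${newRow.status}`,
      orderId: newRow.id,
      meta: { source: 'cdc', table },
    };
  }
  return null;
}

const event = rowChangeToEvent(
  'orders',
  { id: 7, status: 'Placed' },
  { id: 7, status: 'Shipped' },
);
// event.type === 'OrderStatusShipped'
```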

How are inferred events handled?

Inferred events are annotated with provenance and confidence metadata so downstream systems can filter, review, or correct them during replay.
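A sketch of what that filtering looks like at replay time, with hypothetical `meta` field names:

```javascript
// Partition events by provenance metadata: confirmed history feeds strict
// projections; low-confidence inferred events go to a review queue.
const events = [
  { type: 'OrderPlaced', meta: { inferred: false } },
  { type: 'OrderShipped', meta: { inferred: true, confidence: 0.7, source: 'state-diff' } },
];

const confirmedOnly = events.filter(e => !e.meta.inferred);
const reviewQueue = events.filter(e => e.meta.inferred && e.meta.confidence < 0.9);
```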