
This skill validates functional equivalence after a migration by performing side-by-side comparisons, output diffing, and behavioral verification.

```
npx playbooks add skill a5c-ai/babysitter --skill migration-validator
```

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
2.5 KB
---
name: migration-validator
description: Validate functional equivalence after migration with side-by-side comparison and behavioral verification
allowed-tools: ["Bash", "Read", "Write", "Grep", "Glob", "Edit"]
---

# Migration Validator Skill

Validates functional equivalence between source and target systems after migration through comprehensive comparison and behavioral verification.

## Purpose

Enable migration validation for:
- Side-by-side comparison
- Output diffing
- Behavioral verification
- Data consistency checking
- Acceptance criteria verification

## Capabilities

### 1. Side-by-Side Comparison
- Run parallel requests
- Compare responses
- Track differences
- Document discrepancies
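The comparison loop above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the callables `fetch_source` and `fetch_target` stand in for whatever client issues requests against the two environments, and the per-test record mirrors the entries of the `comparisons` array in the output schema below.

```python
# Sketch of side-by-side comparison: issue the same request against both
# systems in parallel and record per-test pass/fail results.
# `fetch_source`/`fetch_target` are illustrative stand-ins for real clients.
from concurrent.futures import ThreadPoolExecutor

def side_by_side(fetch_source, fetch_target, paths):
    """Run identical requests against source and target, tracking differences."""
    report = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        for path in paths:
            # Submit both requests concurrently so the two systems are
            # exercised under comparable timing conditions.
            src_future = pool.submit(fetch_source, path)
            tgt_future = pool.submit(fetch_target, path)
            src, tgt = src_future.result(), tgt_future.result()
            report.append({
                "test": path,
                "status": "passed" if src == tgt else "failed",
                "source": src,
                "target": tgt,
            })
    return report
```

In practice the fetch callables would wrap HTTP clients pointed at the legacy and migrated base URLs; keeping them injectable makes the comparison loop trivially testable.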

### 2. Output Diffing
- Compare API responses
- Diff file outputs
- Check data formats
- Validate transformations
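A structural diff over JSON-like responses, as opposed to a plain text diff, reports exactly which fields diverge. The sketch below is one simple way to implement it, assuming responses are already parsed into Python dicts and lists; the tuple format `(path, source_value, target_value)` is illustrative.

```python
# Recursive structural diff of two JSON-like values.
# Returns a list of (path, source_value, target_value) tuples, one per
# leaf-level difference, so discrepancies can be reported field by field.
def diff_json(source, target, path=""):
    diffs = []
    if isinstance(source, dict) and isinstance(target, dict):
        for key in sorted(set(source) | set(target)):
            diffs += diff_json(source.get(key), target.get(key), f"{path}.{key}")
    elif isinstance(source, list) and isinstance(target, list):
        for i, (s, t) in enumerate(zip(source, target)):
            diffs += diff_json(s, t, f"{path}[{i}]")
        if len(source) != len(target):
            diffs.append((f"{path}.length", len(source), len(target)))
    elif source != target:
        diffs.append((path or ".", source, target))
    return diffs
```

An empty result means the two payloads are structurally identical; a non-empty result pinpoints the diverging paths for triage.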

### 3. Behavioral Verification
- Test user flows
- Verify business logic
- Check edge cases
- Validate error handling
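Error-handling parity is worth checking explicitly: a migrated system that silently succeeds where the original raised an error is a behavioral regression even if happy-path outputs match. A minimal sketch, assuming the two implementations are callable directly (the handler and case names are illustrative):

```python
# Verify that source and target implementations agree on edge cases:
# both must succeed with the same value, or fail with the same error type.
def verify_error_parity(source_handler, target_handler, edge_cases):
    mismatches = []
    for case in edge_cases:
        def outcome(handler):
            try:
                return ("ok", handler(case))
            except Exception as exc:
                # Compare error *types*; messages often differ harmlessly.
                return ("error", type(exc).__name__)
        if outcome(source_handler) != outcome(target_handler):
            mismatches.append(case)
    return mismatches
```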

### 4. Data Consistency Checking
- Compare data states
- Verify calculations
- Check relationships
- Validate constraints
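One common consistency technique is fingerprinting: compare a row count plus an order-independent digest of each table on both sides, so a mismatch anywhere in the data surfaces without shipping every row across. A sketch under the assumption that rows are available as dicts with a primary key (the function names are illustrative):

```python
# Order-independent table fingerprint: row count plus a SHA-256 digest of
# rows canonically serialized and sorted by primary key. Matching
# fingerprints imply the migrated data is identical.
import hashlib
import json

def table_fingerprint(rows, key):
    ordered = sorted(rows, key=lambda r: r[key])
    payload = json.dumps(ordered, sort_keys=True).encode()
    return {"count": len(rows), "digest": hashlib.sha256(payload).hexdigest()}

def check_consistency(source_rows, target_rows, key="id"):
    src = table_fingerprint(source_rows, key)
    tgt = table_fingerprint(target_rows, key)
    return {"count_match": src["count"] == tgt["count"],
            "digest_match": src["digest"] == tgt["digest"]}
```

Because rows are sorted by key before hashing, the check is insensitive to retrieval order, which typically differs between source and target stores.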

### 5. Integration Validation
- Test external integrations
- Verify API contracts
- Check message flows
- Validate events

### 6. Acceptance Criteria Verification
- Check feature completeness
- Verify requirements
- Validate user stories
- Document coverage

## Tool Integrations

| Tool | Purpose | Integration Method |
|------|---------|-------------------|
| Diffy | Response comparison | API |
| Contract testing | API verification | CLI |
| Cypress | E2E validation | CLI |
| Playwright | Browser testing | CLI |
| Custom validators | Business rules | CLI |

## Output Schema

```json
{
  "validationId": "string",
  "timestamp": "ISO8601",
  "source": {
    "environment": "string",
    "version": "string"
  },
  "target": {
    "environment": "string",
    "version": "string"
  },
  "results": {
    "total": "number",
    "passed": "number",
    "failed": "number",
    "skipped": "number"
  },
  "comparisons": [
    {
      "test": "string",
      "status": "passed|failed",
      "source": {},
      "target": {},
      "differences": []
    }
  ],
  "acceptance": {
    "criteria": [],
    "met": "boolean"
  }
}
```
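A report matching the schema above can be assembled from individual comparison results. The sketch below is illustrative: it assumes acceptance criteria are tracked as `{"name": ..., "met": ...}` entries (the schema leaves their shape open) and derives the summary counts and the overall verdict from the inputs.

```python
# Build a validation report conforming to the output schema: summary
# counts are derived from the comparisons, and the acceptance verdict
# requires zero failures plus every criterion being met.
from datetime import datetime, timezone

def build_report(validation_id, source_env, target_env, comparisons, criteria):
    passed = sum(1 for c in comparisons if c["status"] == "passed")
    failed = sum(1 for c in comparisons if c["status"] == "failed")
    return {
        "validationId": validation_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source_env,
        "target": target_env,
        "results": {
            "total": len(comparisons),
            "passed": passed,
            "failed": failed,
            "skipped": len(comparisons) - passed - failed,
        },
        "comparisons": comparisons,
        "acceptance": {
            "criteria": criteria,
            "met": failed == 0 and all(c["met"] for c in criteria),
        },
    }
```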

## Integration with Migration Processes

- **migration-testing-strategy**: Validation execution
- **parallel-run-validation**: Parallel comparison

## Related Skills

- `performance-baseline-capturer`: Performance comparison
- `data-migration-validator`: Data validation

## Related Agents

- `parallel-run-validator`: Parallel validation
- `regression-detector`: Regression detection

Overview

This skill validates functional equivalence after a migration by running side-by-side comparisons and behavioral checks between source and target systems. It focuses on detecting response differences, verifying business logic, and ensuring data consistency so teams can trust migration outcomes.

How this skill works

The skill executes parallel requests and captures outputs from both environments, then performs structured diffs on API responses, files, and data states. It runs behavioral tests and edge-case scenarios, verifies integrations and contracts, and emits a standardized validation report with per-test comparisons and an overall acceptance verdict.

When to use it

  • After migrating services, APIs, or databases to a new environment or version
  • When you need automated side-by-side verification during canary or parallel-run deployments
  • To validate behavioral parity for critical user flows and business rules
  • Before cutover to ensure data consistency and integration contracts are intact
  • During regression test runs to detect subtle functional changes post-migration

Best practices

  • Define clear acceptance criteria and map tests to user stories before running validation
  • Run parallel requests against representative data sets and production-like environments
  • Include contract and integration tests alongside behavioral scenarios for full coverage
  • Capture deterministic inputs and timestamps so comparisons remain reproducible
  • Triage differences into functional regressions, acceptable deltas, and environment noise

Example use cases

  • Compare API responses between legacy and reimplemented endpoints during an API rewrite
  • Verify UI-driven flows and error handling after migrating a web app using Playwright or Cypress
  • Validate database migration by comparing record counts, relationships, and computed fields
  • Confirm message flows and event integrity when moving to a new messaging platform
  • Run acceptance verification as part of a CI/CD pipeline to gate cutover decisions

FAQ

What formats of output can it diff?

It diffs structured API responses, JSON/XML payloads, files, and derived data models using configurable comparison rules.

How are false positives reduced?

By normalizing timestamps, IDs, and environment-specific fields, and by allowing configurable tolerance rules for acceptable deltas.
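Normalization of this kind can be sketched as a pre-processing pass applied to both payloads before diffing. The volatile field names below are illustrative examples, not a fixed list the skill uses:

```python
# Normalize a JSON-like payload before comparison: mask fields that are
# expected to differ between environments (IDs, trace data) and scrub
# ISO-8601 timestamps embedded in string values, so only meaningful
# differences survive the diff.
import re

VOLATILE_KEYS = {"timestamp", "requestId", "traceId"}  # illustrative names

def normalize(value):
    if isinstance(value, dict):
        return {k: "<normalized>" if k in VOLATILE_KEYS else normalize(v)
                for k, v in value.items()}
    if isinstance(value, list):
        return [normalize(v) for v in value]
    if isinstance(value, str):
        return re.sub(r"\d{4}-\d{2}-\d{2}T[\d:.]+Z?", "<ts>", value)
    return value
```

Running `normalize` on both source and target responses before diffing removes the most common sources of environment noise; genuinely divergent fields still surface as differences.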