
omega-sprint skill

/plugin/skills/omega/omega-sprint

This skill orchestrates AI-native sprint management with autonomous agent coordination and continuous delivery, helping teams ship faster with aligned tasks.

npx playbooks add skill doanchienthangdev/omgkit --skill omega-sprint

Review the files below or copy the command above to add this skill to your agents.

---
name: managing-omega-sprints
description: Orchestrates AI-native sprint management with autonomous agent coordination and continuous delivery. Use when running development sprints with AI agent teams or coordinating parallel task execution.
category: omega
triggers:
  - omega sprint
  - sprint planning
  - AI team management
  - agent orchestration
---

# Managing Omega Sprints

Execute **AI-native sprint management** with autonomous agent orchestration, intelligent task routing, and continuous delivery cycles.

## Quick Start

```yaml
# 1. Define sprint vision
Vision:
  Objective: "Implement OAuth2 authentication"
  Success: ["3 providers", "95% completion rate", "OWASP compliant"]

# 2. Break into agent-executable tasks
Tasks:
  - { id: "types", agent: "architect", tokens: 5K }
  - { id: "google-oauth", agent: "fullstack", tokens: 8K, depends: ["types"] }
  - { id: "tests", agent: "tester", tokens: 6K, depends: ["google-oauth"] }

# 3. Execute with autonomy level
Execution:
  Autonomy: "semi-auto"
  Checkpoints: ["phase-complete", "error-threshold"]
  QualityGates: ["coverage > 80%", "no-critical-bugs"]
```

## Features

| Feature | Description | Guide |
|---------|-------------|-------|
| Sprint Lifecycle | Vision, Plan, Execute, Deliver, Retrospect | AI-native 5-phase cycle |
| Task Breakdown | Atomic, testable, agent-sized tasks | Hours not days per task |
| Agent Routing | Match tasks to optimal agents | Capability + load scoring |
| Autonomy Levels | Full-auto to supervised modes | Balance speed and oversight |
| Quality Gates | Automated checkpoints | Coverage, security, performance |
| Parallel Execution | Swarm-based task processing | Maximize parallelization |
| Sprint Analytics | Velocity, quality, efficiency metrics | Continuous improvement |

## Common Patterns

### Sprint Lifecycle

```
VISION ──> PLAN ──> EXECUTE ──> DELIVER ──> RETROSPECT
   │         │         │           │             │
   ▼         ▼         ▼           ▼             ▼
 Define   Break into  Agents     Ship to      Learn and
 success  agent-ready work in   production    improve
 criteria   tasks     parallel
```
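The cycle above can be sketched as a small ordered state machine. This is an illustrative sketch, not part of the skill's API; the `Phase` type and `nextPhase` helper are assumptions:

```typescript
// The five phases, in order; retrospect wraps around to the next sprint's vision
type Phase = 'vision' | 'plan' | 'execute' | 'deliver' | 'retrospect';

const PHASES: Phase[] = ['vision', 'plan', 'execute', 'deliver', 'retrospect'];

// Advance to the next phase in the cycle
function nextPhase(current: Phase): Phase {
  const i = PHASES.indexOf(current);
  return PHASES[(i + 1) % PHASES.length];
}
```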

### Vision Definition

```typescript
interface SprintVision {
  objective: string;
  businessValue: string;
  successCriteria: SuccessCriterion[];
  scope: {
    included: string[];
    excluded: string[];
    risks: Risk[];
  };
  qualityGates: QualityGate[];
}

const vision: SprintVision = {
  objective: "Implement user authentication with OAuth2",
  businessValue: "Reduce signup friction by 60%",
  successCriteria: [
    { metric: "OAuth providers", target: 3 },
    { metric: "Auth completion rate", target: "95%" },
    { metric: "Security audit", target: "OWASP compliant" }
  ],
  qualityGates: [
    { type: 'coverage', threshold: 80 },
    { type: 'security-scan', threshold: 'no-critical' }
  ]
};
```

### Task Breakdown

```typescript
interface SprintTask {
  id: string;
  title: string;
  type: 'feature' | 'bugfix' | 'test' | 'docs';
  priority: 'critical' | 'high' | 'medium';
  estimatedTokens: number;
  dependencies: string[];
  suggestedAgent: AgentType;
  acceptanceCriteria: string[];
}

// Layer-based breakdown (shorthand objects using the SprintTask fields above)
const tasks: Partial<SprintTask>[] = [
  // Layer 1: Foundation
  { id: 'types', title: 'Define TypeScript interfaces', suggestedAgent: 'architect' },
  { id: 'schema', title: 'Create DB migrations', dependencies: ['types'] },

  // Layer 2: Implementation (parallel)
  { id: 'google', title: 'Google OAuth', dependencies: ['types'] },
  { id: 'github', title: 'GitHub OAuth', dependencies: ['types'] },

  // Layer 3: Quality
  { id: 'tests', title: 'Integration tests', dependencies: ['google', 'github'] }
];
```
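The layer comments above can be computed rather than hand-written. A minimal sketch of that grouping, assuming a stripped-down task shape (`TaskNode` and `toLayers` are illustrative helpers, not part of the skill):

```typescript
// Group tasks into dependency layers so each layer can run in parallel
interface TaskNode {
  id: string;
  depends?: string[]; // ids that must complete first
}

function toLayers(tasks: TaskNode[]): string[][] {
  const layers: string[][] = [];
  const done = new Set<string>();
  let remaining = [...tasks];
  while (remaining.length > 0) {
    // A task is ready once all of its dependencies are in completed layers
    const ready = remaining.filter(t => (t.depends ?? []).every(d => done.has(d)));
    if (ready.length === 0) throw new Error('circular dependency detected');
    layers.push(ready.map(t => t.id));
    for (const t of ready) done.add(t.id);
    remaining = remaining.filter(t => !done.has(t.id));
  }
  return layers;
}
```

Applied to the OAuth example, this yields `[['types'], ['schema', 'google', 'github'], ['tests']]` — the same three layers as the comments, with the middle layer free to run in parallel. Tasks with circular dependencies are rejected outright.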

### Agent Routing

```typescript
type AgentType =
  | 'architect' | 'fullstack' | 'frontend' | 'backend'
  | 'debugger' | 'tester' | 'reviewer' | 'docs-manager' | 'oracle';
type TaskType = 'feature' | 'bugfix' | 'test' | 'docs' | 'research';

const routingRules: Record<TaskType, AgentType[]> = {
  feature: ['fullstack', 'frontend', 'backend'],
  bugfix: ['debugger', 'fullstack'],
  test: ['tester'],
  docs: ['docs-manager'],
  research: ['oracle', 'architect']
};

interface Agent {
  capabilities: TaskType[];   // task types this agent can execute
  specializations: string[];  // domain expertise tags (e.g. 'oauth')
  loadFactor: number;         // current load, 0 (idle) to 1 (saturated)
  recentTaskIds: string[];    // recently handled tasks, for context continuity
}

// Scoring algorithm: weighted factors, each normalized to 0..1
function calculateFitScore(agent: Agent, task: SprintTask): number {
  const capabilityMatch = agent.capabilities.includes(task.type) ? 1 : 0;
  const specializationMatch =
    agent.specializations.some(s => task.title.toLowerCase().includes(s)) ? 1 : 0;
  const hasContext = task.dependencies.some(d => agent.recentTaskIds.includes(d));

  let score = 0;
  score += capabilityMatch * 40;          // Core capabilities
  score += specializationMatch * 30;      // Domain expertise
  score += (1 - agent.loadFactor) * 20;   // Availability
  score += hasContext ? 10 : 0;           // Context continuity
  return score;
}
```
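Once fit scores are computed, assignment is simply an argmax over the candidates. A minimal sketch (the `pickAgent` helper and its score-map input shape are assumptions, not the skill's actual API):

```typescript
// Assign the task to the highest-scoring candidate agent
function pickAgent(scores: Map<string, number>): string | undefined {
  let best: string | undefined;
  let bestScore = -Infinity;
  for (const [agentId, score] of scores) {
    if (score > bestScore) {
      best = agentId;
      bestScore = score;
    }
  }
  return best; // undefined when there are no candidates
}
```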

### Autonomy Levels

```typescript
const autonomyConfigs = {
  'full-auto': {
    checkpoints: [{ trigger: 'phase-complete', action: 'notify' }],
    approvalRequired: ['production-deploy']
  },
  'semi-auto': {
    checkpoints: [
      { trigger: 'task-complete', action: 'notify' },
      { trigger: 'phase-complete', action: 'review', timeout: 3600 }
    ],
    approvalRequired: ['merge-to-main', 'production-deploy']
  },
  'supervised': {
    checkpoints: [{ trigger: 'task-complete', action: 'review' }],
    approvalRequired: ['all-merges', 'all-deploys']
  }
};
```
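A hedged sketch of how these configs might be consulted at runtime, assuming the shape above (the `needsApproval` helper and its wildcard handling are assumptions):

```typescript
// Decide whether an action needs explicit human approval at a given autonomy level
type Autonomy = 'full-auto' | 'semi-auto' | 'supervised';

const approvalRules: Record<Autonomy, string[]> = {
  'full-auto': ['production-deploy'],
  'semi-auto': ['merge-to-main', 'production-deploy'],
  'supervised': ['all-merges', 'all-deploys']
};

function needsApproval(level: Autonomy, action: string): boolean {
  const rules = approvalRules[level];
  // 'all-*' entries act as wildcards covering every merge or deploy action
  return rules.includes(action) ||
    (rules.includes('all-merges') && action.includes('merge')) ||
    (rules.includes('all-deploys') && action.includes('deploy'));
}
```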

### Sprint Metrics

```typescript
interface SprintMetrics {
  velocity: { completed: number; planned: number; ratio: number };
  quality: { bugs: number; coverage: number; score: number };
  efficiency: { totalTokens: number; parallelization: number };
  agents: Map<AgentType, { tasks: number; efficiency: number }>;
}

// Dashboard template
`
SPRINT DASHBOARD: ${name}
────────────────────────────────────
PROGRESS         QUALITY         AGENTS
████████░░ 80%   Coverage: 87%   arch: idle
24/30 tasks      Bugs: 2         dev-1: working
                 Security: OK    tester: queued
`
```
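Two of the numbers above can be pinned down with simple formulas. These are assumed definitions for illustration, not pulled from the skill itself:

```typescript
// Completed vs. planned tasks, as a 0..1 ratio
function velocityRatio(completed: number, planned: number): number {
  return planned === 0 ? 0 : completed / planned;
}

// Serial task count divided by dependency-layer count: 1.0 means fully
// serial execution; higher values mean more tasks ran concurrently
function parallelizationRatio(taskCount: number, layerCount: number): number {
  return layerCount === 0 ? 0 : taskCount / layerCount;
}
```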

### Retrospective Framework

```markdown
## Sprint Retrospective

### Summary
- Velocity: X/Y tasks (Z%)
- Quality: Coverage %, Bugs introduced
- Efficiency: Tokens used, Parallelization ratio

### What Went Well
1. [Success] - Why it worked - How to replicate

### What Could Improve
1. [Challenge] - Root cause - Proposed solution

### Action Items
| Action | Priority | Owner |
|--------|----------|-------|
| [Action] | High | [Agent] |

### Learnings to Encode
- [Pattern to add to agent prompts]
```
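The Summary block of the template can be filled mechanically from sprint metrics. A hypothetical helper (name and signature are assumptions, not part of the skill):

```typescript
// Render the retrospective Summary lines from raw sprint numbers
function retroSummary(completed: number, planned: number, coverage: number, bugs: number): string {
  const pct = planned === 0 ? 0 : Math.round((completed / planned) * 100);
  return [
    `- Velocity: ${completed}/${planned} tasks (${pct}%)`,
    `- Quality: Coverage ${coverage}%, Bugs introduced: ${bugs}`
  ].join('\n');
}
```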

## Best Practices

| Do | Avoid |
|----|-------|
| Define clear success criteria before sprint | Starting without vision and scope |
| Break tasks small enough for a single agent | Tasks with circular dependencies |
| Enable maximum parallelization | Skipping quality gates under pressure |
| Set appropriate autonomy based on risk | Ignoring retrospective insights |
| Track metrics consistently | Over-committing capacity |
| Run retrospectives after every sprint | Context-switching agents unnecessarily |
| Encode learnings into agent prompts | Deploying without automated tests |
| Use quality gates to prevent regressions | Letting blockers sit unaddressed |
| Maintain sprint rhythm for predictability | Skipping the retrospective phase |
| Celebrate wins to build momentum | Forgetting to update documentation |

Overview

This skill orchestrates AI-native sprint management by coordinating autonomous agents, routing tasks, and enforcing continuous delivery quality gates. It turns sprint visions into agent-sized work items, runs parallel execution, and provides metrics for velocity, quality, and efficiency. Use it to run repeatable, measurable sprints that balance autonomy with required oversight.

How this skill works

Define a sprint vision with clear objectives and success criteria, then break work into atomic, agent-executable tasks with dependencies and estimated token budgets. Agents are routed using capability, specialization, load, and context scoring so work is executed in parallel where possible. Autonomy levels control checkpoints and approvals; quality gates enforce coverage, security, and performance thresholds. Dashboards and retrospective templates surface metrics and encode learnings back into agent prompts.

When to use it

  • Running development sprints that include autonomous AI agents and human reviewers
  • Coordinating parallel implementation, testing, and review tasks across agent teams
  • Enforcing automated quality gates before merges and production deploys
  • Balancing speed and oversight with configurable autonomy levels
  • Scaling repeatable feature delivery while tracking velocity and efficiency

Best practices

  • Define explicit vision and success criteria before breaking down tasks
  • Keep tasks small and agent-sized to enable parallel execution
  • Set autonomy based on risk: supervised for high-risk deploys, full-auto for stable flows
  • Use quality gates (coverage, security scans) to block unsafe merges
  • Run retrospectives after each sprint and encode learnings into agent prompts

Example use cases

  • Implement OAuth2 across providers: split work into types, provider integrations, and integration tests, route to architect/fullstack/tester agents
  • Parallelize feature rollout by creating independent layer-based tasks that run concurrently with a centralized delivery pipeline
  • Run a controlled semi-auto release where merges require human approval but agents perform builds and tests
  • Track sprint health with dashboards showing velocity, coverage, agent load, and token efficiency
  • Automate retrospectives to generate action items and update agent behaviors for the next sprint

FAQ

How do autonomy levels affect delivery?

Autonomy levels define checkpoints and approval gates: full-auto minimizes human intervention, semi-auto requires approvals for key steps, and supervised mandates reviews for merges and deploys.

What metrics matter most?

Focus on velocity (completed vs planned), quality (coverage, bugs, security score), and efficiency (tokens used and parallelization ratio).

How are tasks routed to agents?

Tasks are scored by capability match, specialization, current load, and context continuity; the highest-fit agent or agent pool is assigned.