
pipeline skill


This skill orchestrates sequential and branching AI agent pipelines, passing structured context between stages to automate complex code tasks.

npx playbooks add skill yeachan-heo/oh-my-claudecode --skill pipeline

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (364 B)
---
name: pipeline
description: "[DEPRECATED] Use /autopilot instead"
deprecated: true
deprecationMessage: "/pipeline has been removed in #1131. Use /autopilot instead."
---

# Pipeline (Deprecated)

This mode has been deprecated and removed in issue #1131 (pipeline unification).

Use `/autopilot` instead for full autonomous execution from idea to working code.

Overview

This skill chains multiple agents into sequential, branching, or parallel workflows so the output of one agent becomes the input for the next. It provides presets for common engineering flows and a compact custom syntax to define pipelines. Use it to orchestrate multi-agent tasks like code review, implementation, debugging, research, and refactoring.

How this skill works

The orchestrator parses pipeline definitions, initializes a persistent state file, and spawns agents for each stage. Each agent receives a structured pipeline_context that includes the original task, previous stage outputs, and routing hints; outputs are merged or routed based on branching rules. The system supports retries, fallbacks, parallel execution, merging strategies, and verification checks before completion.
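The stage-to-stage flow described above can be sketched in a few lines. This is a minimal illustration, not the skill's actual implementation: the function and field names (`run_pipeline`, the toy agents) are hypothetical, though the `pipeline_context` fields mirror the ones named in this document.

```python
# Sketch of sequential orchestration: each stage receives a
# pipeline_context dict and its output is appended for the next stage.
def run_pipeline(task, stages):
    context = {"original_task": task, "previous_stages": []}
    for i, (name, agent) in enumerate(stages):
        context["current_stage"] = name
        context["next_stage"] = stages[i + 1][0] if i + 1 < len(stages) else None
        output = agent(context)
        context["previous_stages"].append({"stage": name, "output": output})
    return context["previous_stages"][-1]["output"]

# Toy agents standing in for real model calls.
explore = lambda ctx: f"explored: {ctx['original_task']}"
execute = lambda ctx: f"implemented based on [{ctx['previous_stages'][-1]['output']}]"

result = run_pipeline("add rate limiting",
                      [("explore", explore), ("executor", execute)])
```

Here `result` carries the executor's output, which itself embeds the explore stage's finding, showing how context accumulates down the chain.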

When to use it

  • Execute multi-step engineering workflows that require specialized agent roles (explore, architect, executor, critic).
  • Run parallel investigations (external docs + code search) and then synthesize results.
  • Automate conditional routing when a task needs different handling based on complexity or findings.
  • Implement repeat-until-success patterns such as test-driven loops or iterative fixes.
  • Use presets for standard tasks like reviews, implementations, debugging, security audits, and refactors.

Best practices

  • Start with built-in presets (review, implement, debug, research, refactor, security) before crafting custom chains.
  • Keep each stage focused: one clear responsibility per agent to improve traceability and verification.
  • Match model tiers to task complexity to control cost—reserve high-tier models for complex analysis.
  • Use parallel stages for independent work and choose a merge strategy (concat, summarize, vote) explicitly.
  • Persist and inspect pipeline state (.omc/pipeline-state.json) for debugging, and delete state files on completion.
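The explicit merge strategies mentioned above (concat, summarize, vote) can be pictured with a small helper. This is a hedged sketch: the strategy names come from this document, but the real skill's merge internals may differ, and `merge` is an illustrative function, not part of the skill's API.

```python
from collections import Counter

def merge(outputs, strategy="concat"):
    """Combine outputs of parallel stages (illustrative only)."""
    if strategy == "concat":
        # Join all parallel outputs into one report.
        return "\n---\n".join(outputs)
    if strategy == "vote":
        # Majority vote: the most common output wins.
        return Counter(outputs).most_common(1)[0][0]
    raise ValueError(f"unknown strategy: {strategy}")

combined = merge(["docs say X", "code says Y"])          # concat report
winner = merge(["fix A", "fix B", "fix A"], "vote")      # majority pick
```

Choosing the strategy explicitly matters because concat preserves every finding for a later synthesis stage, while vote collapses disagreement early.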

Example use cases

  • Run /pipeline review "add rate limiting" to find relevant code, analyze design, critique, then implement.
  • Run /pipeline debug "login fails with OAuth" to locate errors, identify root cause, and apply fixes.
  • Run /pipeline research "implement GraphQL subscriptions" to combine external docs with codebase findings, then synthesize and document recommendations.
  • Custom chain: /pipeline explore -> architect -> executor -> tdd-guide "refactor auth module" to implement and add tests.
  • Conditional pipeline: route heavy refactors to high-tier executor and simple fixes directly to a lightweight executor.
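The conditional example above amounts to a routing decision on an upstream stage's findings. A minimal sketch, assuming a hypothetical `complexity` field and executor names that are not part of the skill's documented interface:

```python
def route_executor(findings):
    # Route heavy refactors to a high-tier executor, simple fixes
    # to a lightweight one (field and tier names are illustrative).
    if findings.get("complexity") == "high":
        return "executor-high"
    return "executor-low"

tier = route_executor({"complexity": "high"})
```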

FAQ

How does data pass between agents?

Agents receive a structured pipeline_context object that includes original_task, previous_stages outputs, current_stage, and next_stage.
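Concretely, the object an agent receives might look like the following. The field names are the ones listed above; the values are illustrative, not taken from a real run.

```python
# Example shape of pipeline_context as seen by the "architect" stage.
pipeline_context = {
    "original_task": "add rate limiting",
    "previous_stages": [
        {"stage": "explore", "output": "rate limiting belongs in middleware"},
    ],
    "current_stage": "architect",
    "next_stage": "executor",
}
```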

What happens on agent failure?

Configurable policies include retry (up to 3 times), skip to next stage, abort the pipeline, or fallback to an alternative agent.
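A retry-then-fallback policy like the one described can be sketched as follows. This is a simplified illustration, assuming a hypothetical `run_with_policy` wrapper rather than the skill's actual failure handling.

```python
def run_with_policy(agent, context, retries=3, fallback=None):
    # Try the agent up to `retries` times; on persistent failure,
    # hand off to the fallback agent if one is configured.
    for _ in range(retries):
        try:
            return agent(context)
        except Exception:
            continue
    if fallback is not None:
        return fallback(context)
    raise RuntimeError("pipeline stage failed after retries")
```

The skip and abort policies would correspond to returning a sentinel (continue the pipeline without this stage's output) or re-raising instead of falling back.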