
lite-skill-generator skill

/.claude/skills/lite-skill-generator

This skill runs JSON-driven multi-agent workflows through intelligent CLI orchestration, enabling context-first task execution and automated workflows.

npx playbooks add skill catlog22/claude-code-workflow --skill lite-skill-generator

Review the files below or copy the command above to add this skill to your agents.

Files (3)
SKILL.md
21.3 KB
---
name: {{SKILL_NAME}}
description: {{SKILL_DESCRIPTION}}
allowed-tools: {{ALLOWED_TOOLS}}
---

# {{SKILL_TITLE}}

{{SKILL_DESCRIPTION}}

## Architecture

```
┌─────────────────────────────────────────────────┐
│               {{SKILL_TITLE}}                    │
│                                                 │
│  Input → {{PHASE_1}} → {{PHASE_2}} → Output    │
└─────────────────────────────────────────────────┘
```

## Execution Flow

```javascript
async function {{SKILL_FUNCTION}}(input) {
  // Phase 1: {{PHASE_1}}
  const prepared = await phase1(input);

  // Phase 2: {{PHASE_2}}
  const result = await phase2(prepared);

  return result;
}
```

### Phase 1: {{PHASE_1}}

```javascript
async function phase1(input) {
  // TODO: Implement {{PHASE_1_LOWER}} logic
  const output = input; // placeholder pass-through until implemented
  return output;
}
```

### Phase 2: {{PHASE_2}}

```javascript
async function phase2(input) {
  // TODO: Implement {{PHASE_2_LOWER}} logic
  const output = input; // placeholder pass-through until implemented
  return output;
}
```

## Usage

```bash
/skill:{{SKILL_NAME}} "input description"
```

## Examples

**Basic Usage**:
```
User: "{{EXAMPLE_INPUT}}"
{{SKILL_NAME}}:
  → Phase 1: {{PHASE_1_ACTION}}
  → Phase 2: {{PHASE_2_ACTION}}
  → Output: {{EXAMPLE_OUTPUT}}
```

Overview

This skill is a JSON-driven multi-agent development framework that intelligently orchestrates CLI calls across models such as Gemini, Qwen, and Codex. It uses a context-first architecture to manage state and prompt construction, enabling repeatable, auditable automated workflows. The tool focuses on task orchestration and developer ergonomics for building complex multi-step agent behaviors.

How this skill works

The skill parses a JSON specification describing agents, tasks, and data flows, then executes a two-phase pipeline: preparation and execution. Phase 1 prepares context, prompts, and inputs; Phase 2 runs agent interactions and aggregates outputs. The framework exposes a CLI for iterative runs and supports intelligent model selection and chaining.
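
As a concrete illustration, a minimal spec might look like the sketch below. The field names (`agents`, `tasks`, `input`, and the dotted output references) are assumptions for illustration, not the actual schema shipped with the skill:

```javascript
// Hypothetical workflow spec -- field names are illustrative, not the real schema.
const workflowSpec = {
  name: "extract-and-summarize",
  version: "1.0.0",
  agents: [
    { id: "extractor",  model: "gemini", role: "pull structured fields from raw text" },
    { id: "summarizer", model: "codex",  role: "summarize the extracted fields" },
  ],
  tasks: [
    { id: "extract",   agent: "extractor",  input: "document" },
    { id: "summarize", agent: "summarizer", input: "extract.output" },
  ],
};
```

In this sketch, Phase 1 would resolve the `input` references into concrete context slices and prompts, and Phase 2 would execute the tasks in dependency order and aggregate their outputs.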

When to use it

  • Automating complex multi-step workflows that require multiple model calls
  • Prototyping multi-agent orchestration or tool-use scenarios
  • Standardizing prompt and context management across projects
  • Running reproducible experiments with different LLM backends
  • Coordinating CLI-driven developer tools and task runners

Best practices

  • Design JSON specs with clear input/output contracts for each agent to simplify chaining
  • Keep context slices small and relevant; avoid bloating prompts with unrelated data
  • Version your JSON workflows and include metadata for reproducibility
  • Use the CLI for iterative testing before embedding workflows in production pipelines
  • Log intermediate outputs and decisions for easier debugging and audit trails (see the sketch after this list)
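
One lightweight way to capture those intermediate outputs is to wrap each phase in a small logging helper. This is a minimal sketch, not part of the framework's API; the `runPhase` helper and the JSONL log format are assumptions:

```javascript
import { appendFileSync } from "node:fs";

// Hypothetical helper: wraps a phase function and appends one audit record per call.
async function runPhase(name, phaseFn, input, logFile = "workflow-audit.jsonl") {
  const startedAt = new Date().toISOString();
  const output = await phaseFn(input);
  // One JSON line per phase keeps the audit trail append-only and easy to diff.
  appendFileSync(logFile, JSON.stringify({ phase: name, startedAt, input, output }) + "\n");
  return output;
}

// Usage: const prepared = await runPhase("phase1", phase1, input);
```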

Example use cases

  • Automated code generation: orchestrate planning, generation, and review agents across Codex/Gemini (sketched after this list)
  • Data extraction pipeline: prepare documents, run extraction agents, and normalize results
  • Multi-agent research experiments: compare model behaviors under identical contexts and prompts
  • Dev tooling: wrap common developer tasks (linting, refactor suggestions, changelog drafts) behind reproducible workflows
  • Customer support automation: route and escalate user issues through specialized agent chains
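
For the code-generation use case, the chain can be pictured as three sequential agent calls. The sketch below is illustrative only; `callAgent` is a hypothetical helper supplied by the caller, not a function provided by the framework:

```javascript
// Hypothetical chaining of planning, generation, and review agents.
// callAgent(model, prompt) is assumed to send a prompt to a backend and return text.
async function generateWithReview(request, callAgent) {
  const plan = await callAgent("gemini", `Plan implementation steps for: ${request}`);
  const code = await callAgent("codex", `Write code following this plan:\n${plan}`);
  const review = await callAgent("gemini", `Review this code and list issues:\n${code}`);
  return { plan, code, review };
}
```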

FAQ

Which LLMs are supported?

The framework targets multiple backends and is designed to work with Gemini, Qwen, Claude, and Codex-style models via pluggable connectors.

How do I debug a failing workflow?

Use the CLI with verbose logging to inspect Phase 1 context preparation and Phase 2 outputs. Enable intermediate output logging in the JSON spec to capture agent exchanges.
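
The exact option name depends on the spec schema; as an assumption, an intermediate-logging switch could look something like this:

```javascript
// Hypothetical: per-workflow logging options in the JSON spec (field names are assumed).
const workflowSpec = {
  name: "extract-and-summarize",
  logging: { level: "verbose", captureIntermediate: true },
  // ...agents and tasks as in the earlier sketch
};
```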

Can I run the framework locally?

Yes. The tool is Python-based and can run locally. Configure connectors for the models you intend to use and run workflows via the provided CLI.