This skill coordinates JSON-driven multi-agent workflows through intelligent CLI orchestration, enabling context-first task execution and automated workflow runs.
```bash
npx playbooks add skill catlog22/claude-code-workflow --skill lite-skill-generator
```
---
name: {{SKILL_NAME}}
description: {{SKILL_DESCRIPTION}}
allowed-tools: {{ALLOWED_TOOLS}}
---
# {{SKILL_TITLE}}
{{SKILL_DESCRIPTION}}
## Architecture
```
┌─────────────────────────────────────────────────┐
│                 {{SKILL_TITLE}}                 │
│                                                 │
│   Input → {{PHASE_1}} → {{PHASE_2}} → Output    │
└─────────────────────────────────────────────────┘
```
## Execution Flow
```javascript
async function {{SKILL_FUNCTION}}(input) {
  // Phase 1: {{PHASE_1}}
  const prepared = await phase1(input);

  // Phase 2: {{PHASE_2}}
  const result = await phase2(prepared);
  return result;
}
```
### Phase 1: {{PHASE_1}}
```javascript
async function phase1(input) {
  // TODO: Implement {{PHASE_1_LOWER}} logic
  return output;
}
```
### Phase 2: {{PHASE_2}}
```javascript
async function phase2(input) {
  // TODO: Implement {{PHASE_2_LOWER}} logic
  return output;
}
```
## Usage
```bash
/skill:{{SKILL_NAME}} "input description"
```
## Examples
**Basic Usage**:
```
User: "{{EXAMPLE_INPUT}}"
{{SKILL_NAME}}:
  → Phase 1: {{PHASE_1_ACTION}}
  → Phase 2: {{PHASE_2_ACTION}}
  → Output: {{EXAMPLE_OUTPUT}}
```
This skill is a JSON-driven multi-agent development framework that coordinates intelligent CLI orchestration across models like Gemini, Qwen, and Codex. It uses a context-first architecture to manage state and prompt construction, enabling repeatable, auditable automated workflows. The tool focuses on task orchestration and developer ergonomics for building complex multi-step agent behaviors.
The skill parses a JSON specification describing agents, tasks, and data flows, then executes a two-phase pipeline: preparation and execution. Phase 1 prepares context, prompts, and inputs; Phase 2 runs agent interactions and aggregates outputs. The framework exposes a CLI for iterative runs and supports intelligent model selection and chaining.
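As a rough illustration of the two-phase pipeline described above, the sketch below parses a minimal JSON-style spec and runs preparation then execution. The spec shape and the names `runWorkflow`, `buildContext`, and `executeAgents` are assumptions for this example, not the framework's actual API; the connector call is stubbed out.

```javascript
// Hypothetical spec describing agents, tasks, and data flow (illustrative only).
const spec = {
  name: "summarize-and-review",
  agents: [
    { id: "summarizer", model: "gemini" },
    { id: "reviewer", model: "qwen" }
  ],
  tasks: [
    { agent: "summarizer", input: "source" },
    { agent: "reviewer", input: "summarizer" }
  ]
};

// Phase 1: prepare context and prompts for each task in the spec.
function buildContext(spec, input) {
  return spec.tasks.map((task) => ({
    agent: spec.agents.find((a) => a.id === task.agent),
    prompt: `Task for ${task.agent}: process ${task.input}`,
    source: input
  }));
}

// Phase 2: run each prepared task and aggregate the outputs.
async function executeAgents(prepared) {
  const outputs = [];
  for (const step of prepared) {
    // A real model connector call would go here; this stub echoes the prompt.
    outputs.push({ agent: step.agent.id, result: step.prompt });
  }
  return outputs;
}

async function runWorkflow(spec, input) {
  const prepared = buildContext(spec, input);
  return executeAgents(prepared);
}
```

Because the spec is plain data, the same runner can be re-invoked iteratively from a CLI, which is what makes runs repeatable and auditable.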
## FAQ

**Which LLMs are supported?**
The framework targets multiple backends and is designed to work with Gemini, Qwen, Claude, and Codex-style models via pluggable connectors.
**How do I debug a failing workflow?**
Use the CLI with verbose logging to inspect Phase 1 context preparation and Phase 2 outputs. Enable intermediate-output logging in the JSON spec to capture agent exchanges.
**Can I run the framework locally?**
Yes. The tool is Python-based and can run locally. Configure connectors for the models you intend to use, then run it via the provided CLI.