This skill generates and refines Volcengine AI images by structuring prompts, setting parameters, and returning usable image links.
```shell
npx playbooks add skill openclaw/skills --skill volcengine-ai-image-generation
```
---
name: volcengine-ai-image-generation
description: Image generation workflow on Volcengine AI services. Use when users need text-to-image, style variants, prompt refinement, or deterministic image generation parameters and troubleshooting.
---
# volcengine-ai-image-generation
Generate and iterate images with clear prompt structure and parameter controls.
## Execution Checklist
1. Confirm model/endpoint and output constraints (size, count, style).
2. Normalize prompt into subject, style, scene, lighting, camera terms.
3. Set generation parameters and run request.
4. Return image links/files with prompt and params.
## Prompt Structure
- Subject
- Composition
- Style
- Lighting
- Quality constraints
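The components above can be joined into a single prompt string. A minimal sketch; the fixed ordering and comma separators are a convention for consistency, not a Volcengine requirement:

```python
def compose_prompt(subject: str, composition: str = "", style: str = "",
                   lighting: str = "", quality: str = "") -> str:
    """Join prompt components in a fixed order, skipping empty ones."""
    parts = [subject, composition, style, lighting, quality]
    return ", ".join(p for p in parts if p)

# e.g. compose_prompt("red fox", style="watercolor", lighting="golden hour")
```

Keeping each component as a separate argument makes style variants a one-field change.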
## References
- `references/sources.md`
This skill provides a structured workflow for generating and iterating images using Volcengine AI services. It focuses on clear prompt construction, deterministic parameter control, and delivering image outputs with reproducible settings. Use it to create text-to-image results, style variants, and refined prompts for predictable outcomes.
The skill normalizes an input prompt into discrete components (subject, composition, style, lighting, quality) and validates model/endpoint constraints such as output size and count. It sets deterministic generation parameters, issues requests to the chosen Volcengine endpoint, and returns image links or files alongside the full prompt and parameters used. It also supports iterative refinements and basic troubleshooting guidance when results diverge from expectations.
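Validating size and count before issuing a request is worth doing up front, since endpoints reject unsupported combinations. A sketch under assumed constraints; the allowed sizes and count limit here are placeholders, not Volcengine's actual limits:

```python
# Hypothetical endpoint constraints -- replace with the real documented values.
ALLOWED_SIZES = {(512, 512), (768, 768), (1024, 1024)}

def validate_constraints(width: int, height: int, count: int,
                         max_count: int = 4) -> None:
    """Fail fast on unsupported output size or image count."""
    if (width, height) not in ALLOWED_SIZES:
        raise ValueError(f"unsupported size {width}x{height}")
    if not 1 <= count <= max_count:
        raise ValueError(f"count must be in [1, {max_count}]")
```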
## FAQ

**What prompt structure should I use?**
Split prompts into subject, composition, style, lighting, and quality constraints so each element is explicit and independently adjustable.

**How do I ensure reproducible outputs?**
Set and record a fixed seed and deterministic generation parameters (steps, sampler, guidance), and confirm the endpoint supports deterministic runs.
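Recording the seed and parameters alongside each output is what makes a run reproducible. A sketch of such a run record; the parameter names are generic diffusion-style settings, not Volcengine-specific:

```python
import json

def run_record(prompt: str, seed: int, steps: int, guidance: float,
               sampler: str, image_url: str) -> str:
    """Serialize everything needed to reproduce a generation run."""
    record = {
        "prompt": prompt,
        "seed": seed,
        "steps": steps,
        "guidance": guidance,
        "sampler": sampler,
        "image_url": image_url,
    }
    return json.dumps(record, sort_keys=True)  # stable key order for diffing
```

Storing one record per delivered image lets a later run replay the exact settings or diff two variants field by field.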