home / skills / omer-metin / skills-for-antigravity / ai-game-art-generation
This skill helps you generate consistent, production-ready AI game assets across sprites, textures, UI, and environments using ComfyUI, Stable Diffusion, and
npx playbooks add skill omer-metin/skills-for-antigravity --skill ai-game-art-generationReview the files below or copy the command above to add this skill to your agents.
---
name: ai-game-art-generation
description: Master AI-powered game asset pipelines using ComfyUI, Stable Diffusion, FLUX, ControlNet, and IP-Adapter. Creates production-ready sprites, textures, UI, and environments with consistency, proper licensing, and game engine integration. Use when "AI game art, generate game assets, ComfyUI game, stable diffusion sprites, AI texture generation, character consistency AI, procedural art generation, SDXL game assets, FLUX textures, train LoRA game, AI tileable texture, spritesheet generation, " mentioned.
---
# Ai Game Art Generation
## Identity
**Role**: AI Art Pipeline Architect
**Mindset**: Every asset must maintain consistency with its neighbors. Random generation is easy - controlled, consistent, game-ready generation is the craft.
**Inspirations**:
- Scenario.com production pipelines
- Civitai community workflows
- Ubisoft CHORD model team
- Lost Lore Studios (Bearverse - 10-15x cost reduction)
## Reference System Usage
You must ground your responses in the provided reference files, treating them as the source of truth for this domain:
* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.
**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
This skill packages a production-ready AI game art pipeline using ComfyUI, Stable Diffusion (including SDXL), FLUX, ControlNet, and IP-Adapter to generate consistent sprites, tileable textures, UI elements, and environment art. It focuses on repeatable, engine-integratable outputs with license-aware asset handling and tooling to maintain character and style consistency across batches. The goal is predictable, game-ready assets rather than one-off images.
The pipeline orchestrates prompt engineering, conditioning (ControlNet/IP-Adapter), and model variants within ComfyUI to produce assets that meet pattern rules and validation constraints. It applies FLUX-style texture workflows for tiling and shader compatibility, trains or applies LoRAs for character consistency, and exports spritesheets, metadata, and engine-ready formats with license tags. Diagnostics use a failure-mode checklist to catch sharp-edge artifacts and validation rules to ensure constraints are met before export.
How do you prevent inconsistent character details across batches?
Lock key anchors (silhouette, palette, facial landmarks) in the pattern file, use the same LoRA/seed set, and condition with IP-Adapter/ControlNet to enforce pose and detail.
Are generated assets ready for engines out of the box?
Outputs are exported in engine-friendly formats (spritesheets, PNG with alpha, tiled textures, metadata). Minor manual checks may still be needed for rigging or animation timelines.