This skill helps you perform AI-powered image editing tasks such as inpainting, outpainting, and image-to-image workflows using popular APIs.
Add it to your agents with:

```sh
npx playbooks add skill omer-metin/skills-for-antigravity --skill ai-image-editing
```
---
name: ai-image-editing
description: Expert patterns for AI-powered image editing including inpainting, outpainting, ControlNet, image-to-image, and API integration with Replicate, Stability AI, and Fal. Use when "ai image editing, inpainting, outpainting, controlnet, image to image, remove object from image, extend image, flux inpaint, sdxl editing, stable-diffusion, flux, replicate, stability-ai, comfyui" mentioned.
---
# AI Image Editing
## Identity
This skill provides expert patterns and actionable guidance for AI-powered image editing workflows, including inpainting, outpainting, ControlNet-guided edits, and image-to-image transforms. It targets integration with popular APIs and models (Replicate, Stability AI, ComfyUI/Flux/SDXL) and enforces safe, validated editing patterns. The goal is to help practitioners produce predictable, high-quality edits and avoid common failure modes.

The skill inspects a proposed editing task and maps it to proven patterns for mask creation, prompt engineering, ControlNet conditioning, and model selection. It validates inputs against strict constraints and warns about known failure modes (artifacts, identity drift, inconsistent lighting). Finally, it outputs a step-by-step recipe and API call patterns that match the chosen provider and model.

## Reference System Usage
You must ground your responses in the provided reference files, treating them as the source of truth for this domain:
* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and *why* they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.
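As a sketch of what an output "API call pattern" might look like, the helper below assembles a validated input payload for a mask-based inpainting model. The parameter names (`image`, `mask`, `prompt`, `strength`) follow common Replicate inpainting schemas but are assumptions here, and the model identifier in the comment is a placeholder; always check the target model's published schema.

```python
def build_inpaint_input(image_url: str, mask_url: str, prompt: str,
                        strength: float = 0.85) -> dict:
    """Assemble an input payload for a mask-based inpainting model.

    Parameter names follow common Replicate inpainting schemas but are
    assumptions; verify against the target model's published schema.
    """
    if not 0.0 < strength <= 1.0:
        raise ValueError("strength must be in (0, 1]")
    if not prompt.strip():
        raise ValueError("prompt must not be empty")
    return {
        "image": image_url,    # source image (URL or file handle)
        "mask": mask_url,      # white = region to repaint, black = keep
        "prompt": prompt,
        "strength": strength,  # how far the edit may deviate from the source
    }

# The actual call would then go through the official client, e.g.:
#   import replicate
#   output = replicate.run("<owner>/<inpainting-model>",
#                          input=build_inpaint_input(...))
```

Validating the payload locally before the network call catches cheap mistakes (empty prompts, out-of-range strength) without burning API credits.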
## FAQ

**Which model is best for detailed inpainting?**
High-capacity SDXL inpainting variants typically deliver the best detail and texture continuity; reach for smaller models when speed matters more, or when a lighter touch reduces the risk of artifacts.
**How do I avoid identity drift when editing faces?**
Provide a clear reference image, limit the edit scope, and run validation checks comparing facial landmarks and color profiles against the reference.
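One cheap, automatable piece of that validation is a color-profile check. The sketch below compares RGB histograms of a reference crop and an edited crop via cosine similarity; this is only a crude proxy for drift (a proper check would also compare facial landmarks with a face-analysis library), and the 0.9 threshold in the usage note is an assumption to tune per workflow.

```python
import math

from PIL import Image


def histogram_similarity(a: Image.Image, b: Image.Image) -> float:
    """Cosine similarity of RGB histograms, in [0, 1].

    A crude proxy for color-profile drift between a reference crop
    and an edited crop; it ignores spatial structure entirely.
    """
    ha = a.convert("RGB").resize((128, 128)).histogram()
    hb = b.convert("RGB").resize((128, 128)).histogram()
    dot = sum(x * y for x, y in zip(ha, hb))
    na = math.sqrt(sum(x * x for x in ha))
    nb = math.sqrt(sum(x * x for x in hb))
    return dot / (na * nb) if na and nb else 0.0
```

In practice you would crop both images to the same face region first, then flag edits where the score drops below a tuned threshold (e.g. 0.9) for manual review.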
**Can I use ControlNet with outpainting?**
Yes: provide conditioning inputs that are aligned with the source image and extended to cover the outpaint area. Keep scale and orientation consistent to preserve structure.
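The mechanical part of an outpainting setup, extending the canvas and building a matching inpaint mask, can be sketched with Pillow. The padding amount, fill color, and right-side-only extension below are illustrative choices, not requirements; the key convention (white mask = region to fill) matches most inpainting models but should be verified per model.

```python
from PIL import Image


def prepare_outpaint(src: Image.Image, pad_right: int = 256):
    """Extend the canvas to the right and build a matching inpaint mask.

    Returns (canvas, mask): mask pixels of 255 mark the new region the
    model should fill; 0 marks pixels to keep. Neutral gray fills the
    new canvas area so the model is not biased toward any color.
    """
    src = src.convert("RGB")
    w, h = src.size
    canvas = Image.new("RGB", (w + pad_right, h), (127, 127, 127))
    canvas.paste(src, (0, 0))
    mask = Image.new("L", (w + pad_right, h), 0)   # 0 = keep original
    mask.paste(255, (w, 0, w + pad_right, h))      # 255 = region to fill
    return canvas, mask
```

Any ControlNet guidance image (depth, canny, etc.) would need the same canvas extension applied so its coordinates stay aligned with the outpainted result.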