
ai-image-editing skill

/skills/ai-image-editing

This skill helps you perform AI-powered image editing tasks such as inpainting, outpainting, and image-to-image workflows using popular APIs.

npx playbooks add skill omer-metin/skills-for-antigravity --skill ai-image-editing

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: ai-image-editing
description: Expert patterns for AI-powered image editing including inpainting, outpainting, ControlNet, image-to-image, and API integration with Replicate, Stability AI, and Fal. Use when "ai image editing, inpainting, outpainting, controlnet, image to image, remove object from image, extend image, flux inpaint, sdxl editing, image-editing, stable-diffusion, flux, replicate, stability-ai, comfyui" is mentioned.
---

# AI Image Editing

## Identity



## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

## Overview

This skill provides expert patterns and actionable guidance for AI-powered image editing workflows, including inpainting, outpainting, ControlNet-guided edits, and image-to-image transforms. It targets integration with popular APIs and models (Replicate, Stability AI, ComfyUI/Flux/SDXL) and enforces safe, validated editing patterns. The goal is to help practitioners produce predictable, high-quality edits and avoid common failure modes.

## How this skill works

The skill inspects a proposed editing task and maps it to proven patterns for mask creation, prompt engineering, ControlNet conditioning, and model selection. It validates inputs against strict constraints and warns about known failure modes (artifacts, identity drift, inconsistent lighting). Finally, it outputs a step-by-step recipe and API call patterns that match the chosen provider and model.
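As an illustration, the task-to-recipe mapping described above could be sketched as a simple lookup. The task names, model identifiers, and recipe steps below are hypothetical placeholders, not the skill's actual internals:

```python
# Hypothetical sketch of mapping an editing task to a recipe.
# Task names, steps, and model ids are illustrative assumptions.

RECIPES = {
    "inpaint": {
        "model": "sdxl-inpainting",   # hypothetical model id
        "steps": ["create tight mask", "write localized prompt",
                  "run inpaint pass", "validate output"],
    },
    "outpaint": {
        "model": "flux-outpaint",     # hypothetical model id
        "steps": ["extend canvas", "align guidance image",
                  "run outpaint pass", "validate output"],
    },
}

def build_recipe(task: str) -> dict:
    """Return a step-by-step recipe for a known task, or raise."""
    if task not in RECIPES:
        raise ValueError(f"no pattern for task: {task}")
    return RECIPES[task]
```

A real implementation would also attach provider-specific call templates to each recipe.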

## When to use it

* Remove unwanted objects while preserving surrounding texture and lighting
* Extend images (outpainting) to match composition and perspective
* Convert sketches or rough drafts into photoreal or stylized images via image-to-image
* Apply precise structural control using ControlNet (pose, edges, scribbles)
* Integrate image editing into pipelines using Replicate, Stability AI, or ComfyUI/Flux
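For API integration, a minimal sketch of assembling an inpainting request body is shown below. The field names (`image`, `mask`, `prompt`, `strength`) follow conventions common to hosted diffusion APIs, but they are assumptions here; verify the exact schema against your provider's API reference before use:

```python
def inpaint_payload(image_url: str, mask_url: str, prompt: str,
                    strength: float = 0.85) -> dict:
    """Assemble an inpainting request body.

    Field names are typical of hosted diffusion APIs but vary by
    provider -- check the actual API reference before sending.
    """
    if not 0.0 < strength <= 1.0:
        raise ValueError("strength must be in (0, 1]")
    return {
        "image": image_url,
        "mask": mask_url,   # white = regenerate, black = keep (a common convention)
        "prompt": prompt,
        "strength": strength,
    }
```

Validating parameters before the network call surfaces mistakes locally instead of as opaque provider errors.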

## Best practices

* Always provide a clean, tightly fitting mask for inpainting; avoid overly large masked areas in a single pass
* Use multi-step edits: coarse structure first, then refine details and color consistency
* Pin a reference for identity-critical edits and validate face fidelity when altering people
* Condition ControlNet with aligned guidance images (same scale and orientation) to reduce artifacts
* Validate output against objective checks (resolution, aspect ratio, color shift, and artifact thresholds) before deployment
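The objective checks in the last practice can be sketched as a small gate. The threshold default here is an illustrative assumption, not a standard; the mean RGB values are per-channel averages you would compute from the decoded images:

```python
def validate_output(src_size, out_size, src_mean_rgb, out_mean_rgb,
                    max_color_shift=12.0):
    """Run objective checks before accepting an edit.

    Sizes are (width, height) tuples; mean RGB values are per-channel
    averages. The color-shift threshold is an illustrative default.
    Returns a list of failure descriptions (empty = passed).
    """
    errors = []
    if out_size != src_size:
        errors.append(f"resolution changed: {src_size} -> {out_size}")
    src_ar = src_size[0] / src_size[1]
    out_ar = out_size[0] / out_size[1]
    if abs(src_ar - out_ar) > 1e-3:
        errors.append("aspect ratio drifted")
    shift = max(abs(a - b) for a, b in zip(src_mean_rgb, out_mean_rgb))
    if shift > max_color_shift:
        errors.append(f"color shift {shift:.1f} exceeds {max_color_shift}")
    return errors
```

Returning a list of failures (rather than a bare boolean) lets a pipeline log exactly which check tripped.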

## Example use cases

* Remove a powerline from a landscape photo while preserving sky gradients and edge continuity
* Outpaint a product shot to create additional negative space for marketing layouts
* Use ControlNet with a sketch to generate a concept art composition that matches a pose reference
* Iteratively refine a portrait with SDXL: fix silhouette, then restore skin texture and color balance
* Automate an image-editing API pipeline that runs mask generation, model selection, and quality validation
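One pipeline building block worth automating is the "avoid overly large masked areas in a single pass" rule: measure mask coverage and split the edit into multiple passes. The 25% per-pass ceiling below is an illustrative default, not a published threshold:

```python
import math

def mask_coverage(mask):
    """Fraction of pixels marked for regeneration.

    `mask` is a 2D list of 0/1 values (1 = regenerate).
    """
    total = sum(len(row) for row in mask)
    marked = sum(sum(row) for row in mask)
    return marked / total

def plan_passes(mask, max_coverage_per_pass=0.25):
    """Split a large edit into multiple coarse-to-fine passes.

    The per-pass coverage ceiling is an illustrative assumption.
    """
    coverage = mask_coverage(mask)
    passes = max(1, math.ceil(coverage / max_coverage_per_pass))
    return {"coverage": coverage, "passes": passes}
```

A production pipeline would derive the per-pass sub-masks as well; this sketch only decides how many passes are needed.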

## FAQ

### Which model is best for detailed inpainting?

High-capacity SDXL inpainting variants typically deliver the best detail and texture continuity; choose smaller models only when speed or cost matters more than fidelity, and expect more artifacts in exchange.

### How do I avoid identity drift when editing faces?

Provide a clear reference image, limit edit scope, and run validation checks comparing facial landmarks and color profiles to the reference.
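One way to sketch such a validation check is comparing face embeddings from the reference and edited images by cosine similarity. The embeddings would come from a face-recognition or landmark model (not shown here), and the 0.8 threshold is an illustrative assumption:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identity_preserved(ref_embedding, edit_embedding, threshold=0.8):
    """Flag identity drift by comparing face embeddings.

    Embeddings are assumed to come from a face-recognition model;
    the threshold is illustrative and should be tuned per model.
    """
    return cosine_similarity(ref_embedding, edit_embedding) >= threshold
```

Color-profile comparison (as mentioned above) can be layered on as a second, independent check.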

### Can I use ControlNet with outpainting?

Yes — provide aligned guidance extended to the outpaint area. Keep scale and orientation consistent to preserve structure.
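Extending the guidance image to the outpaint canvas can be sketched with simple edge replication, which keeps the original scale and orientation intact. Edge replication is one simple choice among several (mirroring or content-aware fill are alternatives); the image here is modeled as a 2D list of pixel values for clarity:

```python
def extend_guidance(image, pad_left, pad_right, pad_top, pad_bottom):
    """Extend a guidance image (2D list of pixel values) to the
    outpaint canvas by replicating edge pixels.

    Replication preserves scale and orientation; mirroring or
    content-aware fill are alternative strategies.
    """
    # Pad each row horizontally with its edge pixels.
    rows = [[r[0]] * pad_left + r + [r[-1]] * pad_right for r in image]
    # Replicate the top and bottom rows vertically.
    return ([rows[0][:] for _ in range(pad_top)]
            + rows
            + [rows[-1][:] for _ in range(pad_bottom)])
```

In practice you would do the same with an image library's edge-replicate padding and feed the result to ControlNet alongside the extended canvas.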