
stable-diffusion skill

/skills/stable-diffusion

This skill guides you through Stable Diffusion image generation, including model setup, prompt engineering, parameters, and quality-focused workflows.

npx playbooks add skill partme-ai/full-stack-skills --skill stable-diffusion

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
693 B
---
name: stable-diffusion
description: Provides comprehensive guidance for Stable Diffusion AI image generation, including model selection, prompt engineering, and generation parameters. Use when the user asks about Stable Diffusion, needs to generate AI images, or wants to configure and run Stable Diffusion models.
license: Complete terms in LICENSE.txt
---

## When to use this skill

Use this skill whenever the user wants to:
- Generate concept art, character designs, or photorealistic images with Stable Diffusion
- Optimize prompts or parameters to reduce artifacts or achieve a specific style
- Configure a local or cloud-based Stable Diffusion instance
- Perform image-to-image edits, inpainting, or upscaling
- Get reproducible results via seeds, checkpoints, and documented hyperparameters

## How to use this skill

Describe the image you want, the target style, and any constraints (model, resolution, hardware). The skill walks through choosing a checkpoint, structuring the prompt and negative prompt, and tuning the sampler, steps, guidance scale, and seed, then covers workflows such as image-to-image and inpainting.

## Best Practices

- Start with a clear, concise prompt and iterate with concrete visual details
- Use negative prompts to suppress unwanted elements and artifacts
- Balance guidance scale and sampling steps against render time
- Fix seeds and record checkpoints and hyperparameters for reproducibility
- Experiment cheaply (fewer steps, smaller images), then refine final renders

## Keywords

Stable Diffusion, text-to-image, image generation, prompt engineering, negative prompt, guidance scale, sampler, seed, checkpoint, image-to-image, inpainting, upscaling

Overview

This skill provides comprehensive, practical guidance for using Stable Diffusion for AI image generation. It covers model selection, prompt engineering, key parameters, and common workflows so you can produce high-quality images reliably. The content focuses on actionable steps and tips for both beginners and experienced users.

How this skill works

The skill explains how to choose and configure Stable Diffusion models, set sampling methods and schedulers, and tune parameters like guidance scale, steps, and seed. It describes prompt structure, negative prompts, and techniques such as conditioning, inpainting, and image-to-image. It also outlines common tooling and Python snippets for running generation locally or via APIs.
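
As a rough illustration of those parameters, here is a minimal text-to-image sketch using the Hugging Face diffusers library; the checkpoint name, prompts, and parameter values are placeholder assumptions rather than recommendations, and a CUDA GPU is assumed.

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Checkpoint, prompts, and values below are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any Stable Diffusion checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

generator = torch.Generator("cuda").manual_seed(42)  # fixed seed -> reproducible output
image = pipe(
    prompt="concept art of a lighthouse at dusk, volumetric light, detailed",
    negative_prompt="blurry, low quality, text, watermark",
    num_inference_steps=30,   # more steps refine detail but cost time
    guidance_scale=7.5,       # how strongly the image follows the prompt
    generator=generator,
).images[0]
image.save("lighthouse.png")
```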

When to use it

  • You want to generate concept art, character designs, or photorealistic images with Stable Diffusion.
  • You need to optimize prompts or parameters to reduce artifacts or achieve a specific style.
  • You are configuring a local or cloud-based Stable Diffusion instance and need guidance on models and resources.
  • You want to perform image-to-image edits, inpainting, or upscale generated images.
  • You need reproducible results via seeding, checkpoint management, or hyperparameter tracking.

Best practices

  • Start with a clear, concise prompt and iterate by adding concrete visual details and stylistic references.
  • Use negative prompts to suppress unwanted elements and reduce common artifacts.
  • Balance guidance scale and sampling steps: higher guidance increases prompt adherence but can oversaturate images, while more steps refine detail at the cost of render time.
  • Seed your runs for reproducibility and document model checkpoints and hyperparameters.
  • Use smaller test images or fewer steps during experimentation, then upscale or increase steps for final renders (see the sketch after this list).
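
To make the seeding and experiment-then-finalize points concrete, here is a hedged sketch that reuses the `pipe` object from the earlier snippet; the step counts and file names are illustrative assumptions.

```python
# Iterate cheaply, then re-render the keeper with the same seed.
# Assumes `pipe` from the earlier text-to-image sketch.
import torch

prompt = "concept art of a lighthouse at dusk, volumetric light, detailed"
seed = 1234

# Fast draft pass: fewer steps to judge composition and prompt wording.
draft = pipe(
    prompt,
    num_inference_steps=15,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
draft.save("draft.png")

# Final pass: same seed and prompt, more steps for detail.
final = pipe(
    prompt,
    num_inference_steps=50,
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
final.save("final.png")

# Record checkpoint, seed, steps, and guidance alongside the output for reproducibility.
```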

Example use cases

  • Create multiple character concept variations by keeping core descriptors and changing style tokens.
  • Perform a guided edit via image-to-image: retain composition while changing color or mood (see the sketch after this list).
  • Generate photorealistic product mockups using camera settings and lighting descriptors in the prompt.
  • Inpaint damaged or incomplete areas by providing masks and focused prompts for the masked region.
  • Batch-generate asset variations for games or marketing with seeds and parameter templates for consistency.
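
For the image-to-image use case above, a minimal sketch with diffusers might look like the following; the checkpoint, input file, and strength value are assumptions chosen for illustration.

```python
# Guided edit via image-to-image: keep the composition, shift the mood.
# Checkpoint, input file, and strength are illustrative placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("concept.png").convert("RGB").resize((512, 512))
edited = pipe(
    prompt="same scene at night, cold blue moonlight, light fog",
    image=init,
    strength=0.55,        # lower values preserve more of the original composition
    guidance_scale=7.5,
    generator=torch.Generator("cuda").manual_seed(7),
).images[0]
edited.save("concept_night.png")
```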

FAQ

Which model should I choose for photorealism vs stylized art?

Choose specialized checkpoints or fine-tuned models: look for models labeled photorealistic for realism and artist-style models for stylized outputs. Test a few on sample prompts to compare.

How do I reduce unwanted artifacts or text in outputs?

Use negative prompts targeting artifacts, increase sampling steps slightly, lower image size for experimentation, and try different samplers or model checkpoints.
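
As one possible illustration, swapping in a different sampler and adding a targeted negative prompt with diffusers could look like this; it reuses the `pipe` from the first sketch, and the scheduler choice and prompt wording are assumptions rather than a universal fix.

```python
# One way to attack artifacts: try a different sampler and a targeted negative prompt.
# Assumes `pipe` from the first sketch; DPM-Solver++ is just one scheduler option.
from diffusers import DPMSolverMultistepScheduler

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="studio photo of a ceramic mug, soft window light, 50mm lens",
    negative_prompt="text, watermark, logo, deformed, blurry",
    num_inference_steps=40,   # a modest bump in steps can clean up detail
    guidance_scale=7.0,
).images[0]
image.save("mug.png")
```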