This skill guides you through Stable Diffusion image generation, including model setup, prompt engineering, parameters, and quality-focused workflows.
To add this skill to your agents:

```
npx playbooks add skill partme-ai/full-stack-skills --skill stable-diffusion
```
---
name: stable-diffusion
description: Provides comprehensive guidance for Stable Diffusion AI image generation including model usage, prompt engineering, parameters, and image generation. Use when the user asks about Stable Diffusion, needs to generate AI images, configure models, or work with Stable Diffusion.
license: Complete terms in LICENSE.txt
---
## When to use this skill
Use this skill whenever the user wants to:
- Generate AI images with Stable Diffusion, locally or via an API
- Choose or configure a Stable Diffusion model checkpoint
- Engineer prompts and negative prompts for better outputs
- Tune generation parameters such as sampler, steps, guidance scale, and seed
- Apply workflows such as image-to-image or inpainting
## How to use this skill
1. Identify the user's goal: text-to-image, image-to-image, or inpainting.
2. Recommend a suitable model checkpoint for the desired style (photorealistic or stylized).
3. Draft a prompt and a negative prompt targeting likely artifacts.
4. Set the key parameters (sampler, steps, guidance scale, seed, resolution) and generate.
5. Iterate: adjust the prompt, sampler, or parameters based on the results.
## Best Practices
- Test several checkpoints on the same prompt before committing to one.
- Use negative prompts to suppress recurring artifacts such as distorted hands or embedded text.
- Fix the seed while iterating on a prompt so that changes are comparable.
- Experiment at a lower resolution, then regenerate or upscale the best results.
## Keywords
Stable Diffusion, image generation, text-to-image, image-to-image, inpainting, prompt engineering, negative prompt, checkpoint, sampler, scheduler, guidance scale, steps, seed
This skill provides comprehensive, practical guidance for using Stable Diffusion for AI image generation. It covers model selection, prompt engineering, key parameters, and common workflows so you can produce high-quality images reliably. The content focuses on actionable steps and tips for both beginners and experienced users.
The skill explains how to choose and configure Stable Diffusion models, set sampling methods and schedulers, and tune parameters like guidance scale, steps, and seed. It describes prompt structure, negative prompts, and techniques such as conditioning, inpainting, and image-to-image. It also outlines common tooling and Python snippets for running generation locally or via APIs.
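The parameters above can be sketched as a plain-Python helper that assembles the keyword arguments a text-to-image pipeline call typically accepts. This is a minimal, library-agnostic sketch: the argument names follow the convention used by the `diffusers` library's pipelines, but the function itself is a hypothetical illustration, not part of any library.

```python
def build_generation_kwargs(prompt, negative_prompt="", steps=30,
                            guidance_scale=7.5, width=512, height=512,
                            seed=None):
    """Collect typical text-to-image parameters into one dict.

    Validates the common-sense ranges mentioned in this skill:
    a non-empty prompt and a sampling-step count in the usual range.
    """
    if not prompt:
        raise ValueError("prompt must be non-empty")
    if not 1 <= steps <= 150:
        raise ValueError("steps outside the usual 1-150 range")
    kwargs = {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "num_inference_steps": steps,
        "guidance_scale": guidance_scale,
        "width": width,
        "height": height,
    }
    if seed is not None:
        # In diffusers, a seed is usually passed as a torch.Generator
        # rather than a bare integer; kept as an int here for clarity.
        kwargs["seed"] = seed
    return kwargs

kwargs = build_generation_kwargs(
    "a photorealistic portrait, soft studio lighting",
    negative_prompt="blurry, extra fingers, watermark, text",
    steps=30, guidance_scale=7.5, seed=42,
)
```

Fixing the seed while varying only one parameter at a time (steps, then guidance scale) makes comparisons between runs meaningful.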
## FAQ

**Which model should I choose for photorealism vs. stylized art?**

Choose a specialized checkpoint or fine-tuned model: look for models labeled photorealistic for realism and artist-style models for stylized outputs. Test a few on the same sample prompts to compare.

**How do I reduce unwanted artifacts or text in outputs?**

Use negative prompts that target the specific artifacts, increase sampling steps slightly, experiment at a lower resolution first, and try different samplers or model checkpoints.
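The artifact-reduction tip above can be made repeatable by keeping a reusable list of common negative-prompt terms and merging it with prompt-specific exclusions. This is a pure-Python sketch; the default term list is illustrative, not exhaustive, and should be tuned per model checkpoint.

```python
# Illustrative default exclusions for common Stable Diffusion artifacts.
DEFAULT_NEGATIVES = [
    "blurry", "lowres", "jpeg artifacts", "watermark",
    "text", "extra fingers", "deformed hands",
]

def build_negative_prompt(extra_terms=()):
    """Merge default artifact terms with prompt-specific ones,
    preserving order and dropping case-insensitive duplicates."""
    seen, terms = set(), []
    for term in list(DEFAULT_NEGATIVES) + list(extra_terms):
        key = term.strip().lower()
        if key and key not in seen:
            seen.add(key)
            terms.append(term.strip())
    return ", ".join(terms)

negative = build_negative_prompt(["text", "oversaturated"])
```

The resulting string can be passed directly as the negative prompt in most Stable Diffusion front-ends and libraries.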