1. Introduction
https://silicongamer.com/stable-diffusion-webui-tutorial-series-part-1
After successfully installing Stable Diffusion WebUI (as covered in Part 1), you might find that clicking the “Generate” button produces disappointing results. This guide explains the basic logic behind image generation and how to improve output quality.
2. Models
- Different models specialize in generating specific types of images (e.g., portraits, animals, anime, historical styles).
- The most popular model repository is CivitAI: https://civitai.com/
- The model used in this guide is XXMix_9realisticSDXL: https://civitai.com/models/124421
- Installation: Place downloaded model files in: stable-diffusion-webui/models/Stable-diffusion/
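Installing a checkpoint is just a file move. The sketch below wraps that step in a small helper; the function name and the `webui_root` argument are my own (adjust the paths to wherever you cloned the WebUI):

```python
from pathlib import Path
import shutil

def install_checkpoint(downloaded: Path, webui_root: Path) -> Path:
    """Move a downloaded .safetensors/.ckpt file into the WebUI model folder."""
    models_dir = webui_root / "models" / "Stable-diffusion"
    models_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    target = models_dir / downloaded.name
    shutil.move(str(downloaded), target)
    return target

# Example (hypothetical paths -- point these at your own download and install):
# install_checkpoint(Path("~/Downloads/xxmix9realisticsdxl.safetensors").expanduser(),
#                    Path("~/stable-diffusion-webui").expanduser())
```

After moving the file, click the refresh icon next to the checkpoint dropdown in the UI (or restart the WebUI) so the new model appears.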
3. Prompts
- Positive Prompts: Describe what you want to generate (e.g., “a realistic portrait of a woman with red hair”).
- Negative Prompts: Specify unwanted features (e.g., “blurry, deformed hands”).
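The two prompt fields map directly onto the request body of the WebUI's optional REST API (available when you launch with the `--api` flag). A minimal sketch, assuming a default local install on port 7860; verify the endpoint against your own setup:

```python
# Positive and negative prompts as they would appear in a txt2img API request.
# The field names follow the AUTOMATIC1111 WebUI API.
payload = {
    "prompt": "a realistic portrait of a woman with red hair",
    "negative_prompt": "blurry, deformed hands",
    "steps": 20,
}

# To actually generate (requires the WebUI running with --api):
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```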
4. Parameters
- Includes settings such as samplers, as well as extensions like ControlNet. (Advanced topics will be covered separately.)
- Beginners can ignore most parameters initially.
5. Reproducing Images
- Download a model and its sample PNG.
- Place the model in the correct folder and select it in the UI.
- In the WebUI:
  - Click PNG Info and drag the sample PNG into the box.
  - The UI will display the embedded metadata (prompts and settings).
  - Click Send to txt2img to load these settings.
- Wait 10–30 seconds (depending on your GPU).
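The PNG Info tab works because the WebUI embeds its generation settings in a text chunk of the PNG itself. You can read that chunk directly with Pillow; a minimal sketch (the function name is my own, and `"parameters"` is the chunk key the WebUI uses):

```python
from typing import Optional
from PIL import Image  # Pillow

def read_generation_info(png_path: str) -> Optional[str]:
    """Return the 'parameters' text chunk embedded in a WebUI-generated PNG,
    or None if the image carries no such metadata."""
    with Image.open(png_path) as im:
        return im.info.get("parameters")
```

If this returns `None`, the sample image was probably stripped of metadata (e.g. re-saved or uploaded through a service that removes text chunks), and PNG Info will come up empty too.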
6.