Stable Diffusion WebUI Tutorial Series Part 2 – Reproducing Sample Images

Silicon Gamer

10/06/2024

updated 15/05/2025

1. Introduction

Part 1: https://silicongamer.com/stable-diffusion-webui-tutorial-series-part-1

After successfully installing Stable Diffusion WebUI (as covered in Part 1), you might find that clicking the “Generate” button produces disappointing results. This guide explains the basic logic behind image generation and how to improve output quality.

 

2. Models

  1. Different models specialize in generating specific types of images (e.g., portraits, animals, anime, historical styles).
  2. The most popular model repository is CivitAI: https://civitai.com/
  3. The model used in this guide is XXMix_9realisticSDXL: https://civitai.com/models/124421
  4. Installation: place downloaded model files in: stable-diffusion-webui/models/Stable-diffusion/
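If you prefer, the copy step can be scripted. The sketch below assumes nothing beyond the folder layout described above; the checkpoint file name and the `demo-webui` root used in the demo are illustrative stand-ins, so point `webui_root` at your actual install location.

```python
import shutil
from pathlib import Path

def install_model(downloaded_file: str, webui_root: str) -> Path:
    """Move a downloaded checkpoint into the WebUI model folder.

    The destination (models/Stable-diffusion) matches the layout
    described above; the file name is whatever you downloaded.
    """
    dest_dir = Path(webui_root) / "models" / "Stable-diffusion"
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    dest = dest_dir / Path(downloaded_file).name
    shutil.move(downloaded_file, dest)
    return dest

# Demo with a stand-in file and a throwaway root (hypothetical names):
Path("xxmix9realisticsdxl.safetensors").write_bytes(b"fake checkpoint data")
installed = install_model("xxmix9realisticsdxl.safetensors", "demo-webui")
print(installed)
```

After copying, click the refresh icon next to the checkpoint dropdown in the WebUI (or restart it) so the new model shows up in the list.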

 

3. Prompts

  • Positive Prompts: Describe what you want to generate (e.g., “a realistic portrait of a woman with red hair”).
  • Negative Prompts: Specify unwanted features (e.g., “blurry, deformed hands”).
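Under the hood, both prompts are just strings handed to the generator. If you later want to drive the WebUI from code rather than the browser, it exposes an HTTP API when launched with the `--api` flag; the sketch below shows how a positive/negative prompt pair fits into a txt2img request. The endpoint path follows the WebUI API, but the prompt text and settings are illustrative, and the request function is only a sketch (it needs a running WebUI instance to actually succeed).

```python
import json
import urllib.request

# A txt2img request payload: "prompt" is the positive prompt,
# "negative_prompt" lists the unwanted features. Prompt text and
# step/size values here are illustrative examples.
payload = {
    "prompt": "a realistic portrait of a woman with red hair",
    "negative_prompt": "blurry, deformed hands",
    "steps": 20,
    "width": 512,
    "height": 512,
}

def txt2img(payload, base_url="http://127.0.0.1:7860"):
    """POST the payload to a WebUI instance started with --api."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response contains base64-encoded images

print(json.dumps(payload, indent=2))
```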

4. Parameters

  1. Includes settings like samplers, ControlNet, etc. (Advanced topics will be covered separately.)
  2. Beginners can ignore most parameters initially.

 

5. Reproducing Images

  1. Download a model and its sample PNG.
  2. Place the model in the correct folder and select it in the UI.
  3. In the WebUI:
    1. Click PNG Info and drag the sample PNG into the box.
    2. The UI will display the embedded metadata (prompts, settings).
  4. Click Send to txt2img to load these settings, then click Generate.
  5. Wait 10–30 seconds (depending on your GPU).
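The PNG Info step works because the WebUI embeds the generation settings as text metadata inside the saved PNG, conventionally under a key named `parameters`. A minimal stdlib sketch of reading that metadata yourself is below; the sample file it builds is synthetic and the parameter text is illustrative, and note that real sample images may instead use compressed (zTXt) or international (iTXt) chunks, which this sketch does not handle.

```python
import struct
import zlib

def read_png_text(path):
    """Return key/value pairs from a PNG's uncompressed tEXt chunks."""
    text = {}
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)  # 4-byte length + 4-byte chunk type
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the CRC
            if ctype == b"tEXt":
                key, _, value = data.partition(b"\x00")
                text[key.decode("latin-1")] = value.decode("latin-1")
            if ctype == b"IEND":
                break
    return text

def _chunk(ctype, data):
    """Serialize one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a tiny synthetic PNG carrying WebUI-style metadata (illustrative text):
params = b"a realistic portrait\nNegative prompt: blurry\nSteps: 20, Seed: 12345"
png = (b"\x89PNG\r\n\x1a\n"
       + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + _chunk(b"tEXt", b"parameters\x00" + params)
       + _chunk(b"IEND", b""))
with open("sample.png", "wb") as f:
    f.write(png)

meta = read_png_text("sample.png")
print(meta["parameters"])
```

This is also a quick way to check, before uploading, whether a sample image you downloaded still carries its metadata; sites that re-encode images often strip these chunks, in which case PNG Info will show nothing.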


 
