Stable Diffusion 3 2B Medium Training with Kohya and SimpleTuner - Full Finetune and LoRA

Offered By: kasukanra via YouTube

Tags

Stable Diffusion Courses, Machine Learning Courses, Neural Networks Courses, Hyperparameter Optimization Courses, LoRA (Low-Rank Adaptation) Courses, Image Generation Courses, Fine-Tuning Courses, ComfyUI Courses

Course Description

Overview

Dive into an extensive 80-minute tutorial on training Stable Diffusion 3 2B Medium with kohya sd-scripts and SimpleTuner, covering both full finetuning and LoRA. Follow along as the art style training process is documented, including experiments, mistakes, and analysis of results. Learn about environment setup, parameter configuration, and various training approaches. Explore topics such as SDPA, multiresolution noise, timesteps, and Prodigy optimizer settings. Gain insights into troubleshooting dependency issues, running workflows, and testing models. Compare different learning rates, analyze results in Weights & Biases, and understand the trade-offs between full finetuning and LoRA. Benefit from practical tools, theoretical discussion, and real-world examples to deepen your understanding of SD3 training for art styles, concepts, and subjects.
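
The kohya half of the video drives training through a .toml configuration (the ".toml file setup" step in the syllabus below). As a minimal sketch of a dataset config in the documented sd-scripts TOML format, with placeholder paths and values rather than the ones used in the video:

    [general]
    enable_bucket = true             # bucket images by aspect ratio
    caption_extension = ".txt"       # sidecar caption files

    [[datasets]]
    resolution = 1024                # training resolution
    batch_size = 1

      [[datasets.subsets]]
      image_dir = "/path/to/train_images"  # placeholder path
      num_repeats = 10                     # repeats per epoch

The fine-tuning route shown in the video additionally builds a meta_cap.json (the "Creating the meta_cap.json" step), a metadata file that, broadly, maps each training image to its caption and is referenced from the config instead of sidecar .txt files.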

Syllabus

Introduction
List of SD3 training repositories
Method of approach
kohya sd-scripts environment setup
.toml file setup
SDPA
Multiresolution noise
Timesteps
.toml miscellaneous
Creating the meta_cap.json
sd-scripts sd3 parameters
sd3 pretrained model path
kohya sd3 readme
sd3 sampler settings
sd3 SDPA
Prodigy settings
Dependency issues
Actually running the training
How to run an sd3 workflow and test the model
kohya sd3 commit hash
Now what?
SD3 AdamW8Bit
wandb proof
Is it over?
Hindsight training appendix
Upper bound of sd3 LR: exploding gradient at 1.5e-3 in kohya
LR = 1.5e-4
SimpleTuner quickstart
SimpleTuner environment setup
Setting up CLI logins
SD3 environment overview
Dataset settings overview
Dataset settings hands-on
multidatabackend.json (see the JSON sketch after this syllabus)
SimpleTuner documentation
sdxl_env.sh
Model name
Remaining settings
train_sdxl.sh
Diffusers vs. Checkpoints
Symlinking models
ComfyUI UNET loader
Initial explorations: overfitting?
Environment art: overfitting?
Character art: overfitting evaluation
Trying short prompts
ODE samplers
Testing other prompts
How to generate qualitative grids
Generating grids through API workflow (see the Python sketch after this syllabus)
LR = 8e-6
Analyzing wandb
Higher LR = 1.5e-5
Ablation study #1
Ablation study #2
Ablation study #3
SimpleTuner LoRA setup
Adding lora_rank/lora_alpha to accelerate launch
Failed LoRA qualitative grids (rank/alpha = 16)
Exploding gradient LR = 1.5e-3
LR = 4e-4 #1
LR = 4e-4 #2
LR = 6.5e-4
Finetune vs. LoRA #1
Finetune vs. LoRA #2
Finetune vs. LoRA #3
Finetune vs. LoRA environment
Conclusion
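
For the "multidatabackend.json" step above: SimpleTuner describes its datasets in a multidatabackend.json file, a JSON array with one entry per data backend. A minimal sketch with illustrative ids and placeholder paths; exact keys vary across SimpleTuner versions, so treat this as a shape reference rather than a drop-in file:

    [
      {
        "id": "art-style-images",
        "type": "local",
        "instance_data_dir": "/path/to/images",
        "resolution": 1024,
        "resolution_type": "pixel",
        "crop": false,
        "caption_strategy": "textfile",
        "cache_dir_vae": "cache/vae/art-style-images"
      },
      {
        "id": "text-embed-cache",
        "dataset_type": "text_embeds",
        "type": "local",
        "default": true,
        "cache_dir": "cache/text"
      }
    ]

The first entry is a local image backend with .txt-file captions; the second is the text-embedding cache backend SimpleTuner expects alongside image datasets.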

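For the "Generating grids through API workflow" step above: ComfyUI exposes an HTTP endpoint that accepts a workflow exported in API format, which makes it straightforward to queue a grid of prompt/seed combinations for qualitative comparison. A minimal Python sketch, assuming a local ComfyUI server on the default port and a workflow saved via "Save (API Format)"; the node ids "6" and "3" are hypothetical and depend on your exported graph:

    import itertools
    import json
    import urllib.request

    COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI endpoint

    # Workflow exported from ComfyUI in API format ("Save (API Format)").
    with open("workflow_api.json") as f:
        base_workflow = json.load(f)

    prompts = ["a castle, painterly style", "a forest shrine, concept art"]
    seeds = [1, 2, 3]

    for prompt, seed in itertools.product(prompts, seeds):
        wf = json.loads(json.dumps(base_workflow))  # cheap deep copy per job
        wf["6"]["inputs"]["text"] = prompt          # positive-prompt node (hypothetical id)
        wf["3"]["inputs"]["seed"] = seed            # sampler node (hypothetical id)
        req = urllib.request.Request(
            COMFY_URL,
            data=json.dumps({"prompt": wf}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # queue the job; outputs land in ComfyUI's output folder

Each queued job renders one cell; the saved images can then be tiled into qualitative grids like those analyzed in the video.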

Taught by

kasukanra

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX