Using Stable Diffusion and Textual Inversion to Create Stylized Character Concept Art
Offered By: kasukanra via YouTube
Course Description
Overview
Syllabus
I'm not using the vanilla Stable Diffusion 1.4 checkpoint for my textual inversion training model or img2img. The model I'm using is a 0.5/0.5 weighted-sum merge of Stable Diffusion 1.4 and Waifu Diffusion 1.3 (not the final 1.3 release). I go over this in the video.
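The 0.5/0.5 weighted-sum merge described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the exact merge script used in the video; the file paths are placeholders, and it assumes two Stable-Diffusion-style `.ckpt` files whose state dicts share keys.

```python
import torch

def merge_checkpoints(path_a, path_b, alpha=0.5, out_path="merged.ckpt"):
    """Weighted-sum merge: alpha * A + (1 - alpha) * B for each shared tensor."""
    ckpt_a = torch.load(path_a, map_location="cpu")
    ckpt_b = torch.load(path_b, map_location="cpu")
    # Some checkpoints nest weights under "state_dict"; fall back to the top level.
    sd_a = ckpt_a.get("state_dict", ckpt_a)
    sd_b = ckpt_b.get("state_dict", ckpt_b)
    merged = {}
    for key, tensor_a in sd_a.items():
        if key in sd_b and torch.is_tensor(tensor_a):
            # Linear interpolation between the two models' weights.
            merged[key] = alpha * tensor_a + (1.0 - alpha) * sd_b[key]
        else:
            merged[key] = tensor_a  # keep keys unique to model A unchanged
    torch.save({"state_dict": merged}, out_path)
    return out_path
```

With `alpha=0.5` this gives the even split between the two checkpoints mentioned above; other values bias the merge toward one model.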
- Intro/Preview of character artwork created with the help of stable diffusion
- Some context about textual inversion
- Short rundown of the image dataset I used as input to textual inversion training
- Continuation of textual inversion process
- Explanation of checkpoint merging
- Checking training process
- Img2img demo on character sketch
- Explanation of prompts
- How to use loopback and why I use it
- Sample output of loopback generation
- Short narrated real-time demo of painting over loopback images
- Demonstration of spot healing brush to correct irregularities
- Painting the ear
- Start of the image finalization
- Sharpening the image: Filter > Other > High Pass with a lowish radius (1.0-1.5 px), then set the layer to Hard Light blending mode
- Adding bloom: Filter > Blur > Gaussian Blur, then set the layer to Screen blending mode
- Camera Raw Filter
- Using the third-party plugin AKVIS ArtWork to add a bit of painterly effect. Use sparingly.
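The sharpening and bloom steps above can be approximated outside Photoshop. A minimal NumPy sketch, using a box blur as a crude stand-in for Gaussian blur and working on a grayscale image in [0, 1]; the function names and parameters are illustrative, not part of the workflow shown in the video:

```python
import numpy as np

def box_blur(img, k=3):
    # Average a k x k neighborhood (k odd); a crude stand-in for Gaussian blur.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def high_pass_sharpen(img, k=3, amount=1.0):
    # High pass = original minus blurred copy; adding that detail back
    # boosts edges, roughly what the Hard Light blend of a High Pass layer does.
    detail = img - box_blur(img, k)
    return np.clip(img + amount * detail, 0.0, 1.0)

def add_bloom(img, k=9, strength=0.5):
    # Screen-blending a blurred copy brightens highlights: 1 - (1-a)(1-b).
    glow = box_blur(img, k) * strength
    return 1.0 - (1.0 - img) * (1.0 - glow)
```

A screen blend can only brighten and a small high-pass radius only touches fine detail, which is why both steps are safe finishing passes near the end of the paintover.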
Taught by
kasukanra
Related Courses
- The New AI Model Licenses Have a Legal Loophole - OpenRAIL-M of BLOOM, Stable Diffusion, etc. (Yannic Kilcher via YouTube)
- Stable Diffusion - Master AI Art: Installation, Prompts, Txt2img-Img2img, Out-Inpaint and Resize Tutorial (ChamferZone via YouTube)
- Get Started With Stable Diffusion - Code, HF Spaces, Diffusers Notebooks (Aleksa Gordić - The AI Epiphany via YouTube)
- Stable Diffusion Animation Tutorial - Deforum All Settings Explained - Make Your Own AI Video (Sebastian Kamph via YouTube)
- Stable Diffusion - What, Why, How? (Edan Meyer via YouTube)