Using Stable Diffusion and Textual Inversion to Create Stylized Character Concept Art
Offered By: kasukanra via YouTube
Course Description
Overview
Syllabus
I'm not using the vanilla Stable Diffusion 1.4 checkpoint for my textual inversion training model or for img2img. The model I'm using is a 0.5/0.5 weighted-sum merge of Stable Diffusion 1.4 and Waifu Diffusion 1.3 (not the final 1.3 model). I go over this in the video.
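The weighted-sum merge mentioned above is just a per-weight linear interpolation between the two checkpoints. A minimal sketch, assuming the checkpoints are dicts of weights keyed by layer name (real Stable Diffusion checkpoints are PyTorch state dicts of tensors; plain floats stand in here so the example is self-contained):

```python
def weighted_sum_merge(state_a, state_b, alpha=0.5):
    """Merge two model state dicts as alpha * A + (1 - alpha) * B.

    Keys present in only one dict are copied over unchanged.
    """
    merged = {}
    for key in state_a.keys() | state_b.keys():
        if key in state_a and key in state_b:
            merged[key] = alpha * state_a[key] + (1 - alpha) * state_b[key]
        else:
            merged[key] = state_a.get(key, state_b.get(key))
    return merged

# Toy example: floats standing in for tensors from the two checkpoints.
sd_14 = {"unet.w": 1.0, "vae.w": 3.0}       # stand-in for Stable Diffusion 1.4
wd_13 = {"unet.w": 2.0, "text_enc.w": 5.0}  # stand-in for Waifu Diffusion 1.3
merged = weighted_sum_merge(sd_14, wd_13, alpha=0.5)
# merged["unet.w"] == 1.5
```

With `alpha=0.5` this reproduces the 0.5/0.5 split described above; other ratios bias the result toward one parent model.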
- Intro/Preview of character artwork created with the help of stable diffusion
- Some context about textual inversion
- Short rundown of the image dataset I used as input to textual inversion training
- Continuation of textual inversion process
- Explanation of checkpoint merging
- Checking training process
- Img2img demo on character sketch
- Explanation of prompts
- How to use loopback and why I use it
- Sample output of loopback generation
- Short narrated real-time demo of painting over loopback images
- Demonstration of spot healing brush to correct irregularities
- Painting the ear
- Start of the image finalization
- Sharpening the image: Filter - Other - High Pass with a low radius (around 1.0 - 1.5 px), then set the layer to Hard Light blending mode
- Adding bloom: Filter - Blur - Gaussian Blur pass, then set the layer to Screen blending mode
- Camera Raw Filter
- Using the third-party plugin AKVIS ArtWork to add a slight painterly effect. Use sparingly.
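The loopback step in the syllabus above repeatedly feeds each img2img result back in as the next input, so the image drifts progressively further from the original sketch. A minimal sketch of the idea, where `generate_img2img` is a hypothetical stand-in for whatever img2img backend is used:

```python
def generate_img2img(image, prompt, denoising_strength):
    """Hypothetical placeholder: a real backend would run the diffusion
    sampler here. This stub just records each pass in the string."""
    return f"{image}+gen(s={denoising_strength:.2f})"

def loopback(image, prompt, iterations=4, strength=0.6, decay=0.9):
    """Feed each img2img output back as the next input.

    Returns the full history so intermediate frames can be kept as
    paint-over candidates, as done in the video.
    """
    history = [image]
    for _ in range(iterations):
        image = generate_img2img(image, prompt, denoising_strength=strength)
        strength *= decay  # ease off so later passes change the image less
        history.append(image)
    return history
```

Decaying the denoising strength each pass is a common choice because it lets early iterations restructure the sketch while later ones only refine it.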
Taught by
kasukanra
Related Courses
Concept Art for Video Games - Michigan State University via Coursera
The Game Design and AI Master Class Beginner to Expert - Udemy
The Ultimate Guide to Digitally Painting Everything - Udemy
Digitally Painting Light and Color: Amateur to Master - Udemy
Photorealistic Digital Painting From Beginner To Advanced - Udemy