Stable Diffusion - Applying ControlNet to Character Design - Part 2
Offered By: kasukanra via YouTube
Course Description
Overview
Syllabus
Intro
Download and place ControlNet 1.1 models in proper directory
Segment Anything extension
Install Visual Studio Build Tools if you have any errors regarding pycocotools
Generate baseline reference using traditional merged inpainting model
Using Grounding DINO to create a semi-supervised inpaint mask
Enable ControlNet 1.1 inpaint global harmonious
ControlNet 1.1 inpainting gotcha #1
ControlNet 1.1 gotcha #2
Tuning the inpainting parameters
Analyzing the new tuned outputs
Compositing the ControlNet 1.1 inpaint output in Photoshop
ControlNet 1.1 inpaint without Grounding DINO
Exploring ControlNet 1.1 instruct pix2pix for targeted variations
Determining the limitations of ip2p
Using segment anything with ip2p
Applying ip2p + Grounding DINO to PNGtuber
Analyzing the tuned PNGtuber results
ControlNet 1.1 Tile model overview
Applying the tile model to the shipbuilder illustration
Showing the thumbnail tile model generation
Introducing the image that will be used for tile model contextual upscaling
Checking a GitHub issue for more information about the tile model
Contextual upscaling with ControlNet 1.1 tile model
Comparing upscaler methods: tile model, vanilla Ultimate SD Upscale, and 4x UltraSharp
Using the tile model upscale on the star pupils chibi
Compositing the upscaled closed-mouth expression
Creating the closed-eyes expression
Closing thoughts
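For anyone setting up before following along, here is a minimal sketch of the first syllabus step: downloading the ControlNet 1.1 checkpoints the video relies on (inpaint, ip2p, and tile) from the lllyasviel/ControlNet-v1-1 repository and copying them to where the AUTOMATIC1111 web UI looks for them. The web UI path is an assumption about a typical local install, not a value taken from the video.

```python
from pathlib import Path
import shutil
from huggingface_hub import hf_hub_download

# Assumed local install location; adjust to wherever your web UI lives.
CONTROLNET_DIR = Path("stable-diffusion-webui/models/ControlNet")
CONTROLNET_DIR.mkdir(parents=True, exist_ok=True)

# The three ControlNet 1.1 models covered in the video.
MODELS = [
    "control_v11p_sd15_inpaint.pth",
    "control_v11e_sd15_ip2p.pth",
    "control_v11f1e_sd15_tile.pth",
]

for name in MODELS:
    # Download to the Hugging Face cache, then copy into the web UI folder.
    cached = hf_hub_download(repo_id="lllyasviel/ControlNet-v1-1", filename=name)
    shutil.copy(cached, CONTROLNET_DIR / name)
    print(f"placed {name} -> {CONTROLNET_DIR / name}")
```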
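Similarly, a rough sketch of what the Segment Anything extension does when Grounding DINO builds the semi-supervised inpaint mask: a text prompt yields bounding boxes, and SAM refines the chosen box into a pixel mask. The checkpoint filenames and the "white dress" prompt below are placeholders, not values from the video.

```python
import numpy as np
import torch
from torchvision.ops import box_convert
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

# Text-prompted detection: Grounding DINO finds boxes matching the caption.
# Config/checkpoint filenames are placeholders for your local copies.
dino = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
image_source, image = load_image("character.png")
boxes, logits, phrases = predict(
    model=dino,
    image=image,
    caption="white dress",
    box_threshold=0.35,
    text_threshold=0.25,
)

# Convert the normalized cxcywh boxes to pixel-space xyxy for SAM.
h, w, _ = image_source.shape
boxes_xyxy = box_convert(
    boxes * torch.tensor([w, h, w, h]), in_fmt="cxcywh", out_fmt="xyxy"
).numpy()

# SAM turns the first detected box into a pixel-accurate inpaint mask.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)
masks, _, _ = predictor.predict(box=boxes_xyxy[0], multimask_output=False)
mask = (masks[0] * 255).astype(np.uint8)  # feed this into img2img inpaint
```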
Taught by
kasukanra