Stable Diffusion: Applying ControlNet to Character Design - Part 2
Offered By: kasukanra via YouTube
Course Description
Overview
Syllabus
Intro
Download and place ControlNet 1.1 models in proper directory
Segment Anything extension
Install Visual Studio Build Tools if you have any errors regarding pycocotools
Generate baseline reference using traditional merged inpainting model
Using Grounding DINO to create a semi-supervised inpaint mask
Enable ControlNet 1.1 inpaint_global_harmonious
ControlNet 1.1 inpainting gotcha #1
ControlNet 1.1 inpainting gotcha #2
Tuning the inpainting parameters
Analyzing the new tuned outputs
Compositing ControlNet 1.1 inpaint output in Photoshop
ControlNet 1.1 inpaint without Grounding DINO
Exploring ControlNet 1.1 instruct pix2pix for targeted variations
Determining the limitations for ip2p
Using segment anything with ip2p
Applying ip2p + Grounding DINO to PNGtuber
Analyzing the tuned PNGtuber results
ControlNet 1.1 Tile model overview
Applying the tile model to the shipbuilder illustration
Showing the thumbnail tile model generation
Introducing the image that will be used with tile model contextual upscaling
Checking a GitHub issue for more information regarding the tile model
Contextual upscaling with ControlNet 1.1 tile model
Comparing upscaler methods: tile model, vanilla Ultimate SD Upscale, 4x-UltraSharp
Using the tile model upscale on the star pupils chibi
Compositing the upscaled closed-mouth expression
Creating the closed eyes expression
Closing thoughts
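The tile-model and Ultimate SD Upscale chapters above both rely on the same underlying idea: a large canvas is split into overlapping tiles, each tile is processed at native resolution, and the results are blended back together. A minimal sketch of that tiling logic (function names and parameters are illustrative, not the extension's actual API):

```python
import numpy as np

def split_into_tiles(image, tile_size=256, overlap=64):
    """Yield (x, y, tile) crops covering the image with overlap,
    mimicking how a tiled upscaler walks a large canvas."""
    h, w = image.shape[:2]
    step = tile_size - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield x, y, image[y:y + tile_size, x:x + tile_size]

def reassemble(tiles, shape, overlap=64):
    """Blend (possibly processed) tiles back onto one canvas,
    feathering the left/top overlap bands to hide seams."""
    out = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape[:2], dtype=np.float64)
    ramp = np.linspace(0.0, 1.0, overlap)
    for x, y, tile in tiles:
        th, tw = tile.shape[:2]
        w_mask = np.ones((th, tw))
        if x > 0:  # feather the band shared with the left neighbour
            w_mask[:, :overlap] *= ramp
        if y > 0:  # feather the band shared with the top neighbour
            w_mask[:overlap, :] *= ramp[:, None]
        out[y:y + th, x:x + tw] += tile * w_mask[..., None]
        weight[y:y + th, x:x + tw] += w_mask
    blended = out / np.maximum(weight[..., None], 1e-8)
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)
```

In the real extension each tile would be run through img2img (optionally with the ControlNet tile model guiding it) before reassembly; here the tiles are passed through unchanged, so reassembling reproduces the input image.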
Taught by
kasukanra