YoVDO

Stable Diffusion - Applying ControlNet to Character Design - Part 2

Offered By: kasukanra via YouTube

Tags

Stable Diffusion Courses, Microsoft Visual Studio Courses, Digital Art Courses, ControlNet Courses, Character Design Courses, Concept Art Courses

Course Description

Overview

Dive into an in-depth 57-minute tutorial on applying ControlNet 1.1 to character design, focusing on Stable Diffusion techniques. Learn how to set up ControlNet 1.1 models, use the Segment Anything extension, and troubleshoot common issues. Explore advanced inpainting techniques using Grounding DINO and global harmonious inpainting. Discover the potential of instruct pix2pix for targeted variations and its application to PNGtuber creation. Gain insights into the ControlNet 1.1 Tile model for contextual upscaling and compare various upscaling methods. Master the art of compositing and expression creation for character designs. Perfect for digital artists, concept artists, and AI art enthusiasts looking to enhance their character design workflow using cutting-edge machine learning tools.
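The "contextual upscaling" covered in the course relies on tile-based upscalers (such as Ultimate SD Upscale paired with the ControlNet Tile model), which process a large image in small tiles and stitch the results back together. A minimal sketch of that tiling loop, with `upscale_tiled` as a hypothetical helper and nearest-neighbor standing in for the per-tile diffusion pass:

```python
import numpy as np

def upscale_tiled(img, scale=2, tile=64):
    """Upscale an image tile by tile, the way tile-based upscalers
    process large images: each tile is upscaled independently, then
    written back into the enlarged canvas. Nearest-neighbor (repeat)
    stands in for the real per-tile diffusion pass."""
    h, w = img.shape[:2]
    out = np.zeros((h * scale, w * scale) + img.shape[2:], dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            # Nearest-neighbor enlargement of this tile.
            up = patch.repeat(scale, axis=0).repeat(scale, axis=1)
            out[y * scale:(y + patch.shape[0]) * scale,
                x * scale:(x + patch.shape[1]) * scale] = up
    return out

# Toy 4x4 grayscale image, upscaled 2x in 2x2 tiles.
demo = np.arange(16, dtype=np.uint8).reshape(4, 4)
big = upscale_tiled(demo, scale=2, tile=2)
```

In the real workflow each `patch` would be re-rendered by Stable Diffusion with the Tile model providing local context, which is what lets the upscale add plausible detail rather than just enlarge pixels.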

Syllabus

Intro
Download and place ControlNet 1.1 models in proper directory
Segment Anything extension
Install Visual Studio Build Tools if you encounter errors regarding pycocotools
Generate baseline reference using traditional merged inpainting model
Using Grounding DINO to create a semi-supervised inpaint mask
Enable ControlNet 1.1 inpaint global harmonious
ControlNet 1.1 inpainting gotcha #1
ControlNet 1.1 gotcha #2
Tuning the inpainting parameters
Analyzing the new tuned outputs
Compositing ControlNet 1.1 inpaint output in photoshop
ControlNet 1.1 inpaint without Grounding DINO
Exploring ControlNet 1.1 instruct pix2pix for targeted variations
Determining the limitations of ip2p
Using Segment Anything with ip2p
Applying ip2p + Grounding DINO to PNGtuber
Analyzing the tuned PNGtuber results
ControlNet 1.1 Tile model overview
Applying the tile model to the shipbuilder illustration
Showing the thumbnail tile model generation
Introducing the image that will be used with tile model contextual upscaling
Checking a GitHub issue for more information regarding the tile model
Contextual upscaling with ControlNet 1.1 tile model
Comparing upscaler methods: tile model, vanilla Ultimate SD Upscale, 4x UltraSharp
Using the tile model upscale on the star pupils chibi
Compositing the upscaled closed-mouth expression
Creating the closed eyes expression
Closing thoughts
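The compositing steps above are done by hand in Photoshop in the video, but they boil down to masked alpha blending: take the inpainted pixels where the mask is on, keep the original elsewhere. A minimal NumPy sketch, with `composite_inpaint` as a hypothetical helper:

```python
import numpy as np

def composite_inpaint(original, inpainted, mask):
    """Blend an inpainted result over the original image using a
    grayscale mask (1.0 = take the inpainted pixel, 0.0 = keep the
    original). A programmatic analogue of masked-layer compositing
    in an image editor."""
    mask = mask[..., None].astype(np.float32)  # broadcast over RGB channels
    blended = inpainted * mask + original * (1.0 - mask)
    return blended.astype(original.dtype)

# Toy 2x2 RGB images: original is black, inpainted is white.
original = np.zeros((2, 2, 3), dtype=np.uint8)
inpainted = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[1.0, 0.0], [0.0, 1.0]])  # composite only the diagonal

result = composite_inpaint(original, inpainted, mask)
```

A soft (feathered) mask with values between 0 and 1 gives the smooth seams the tutorial achieves with layer masks.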


Taught by

kasukanra

Related Courses

AWS Flash - Generative AI with Diffusion Models
Amazon Web Services via AWS Skill Builder
AWS Flash - Generative AI with Diffusion Models (Japanese)
Amazon Web Services via AWS Skill Builder
AWS Flash - Generative AI with Diffusion Models (Simplified Chinese)
Amazon Web Services via AWS Skill Builder
AWS Flash - Generative AI with Diffusion Models (Traditional Chinese)
Amazon Web Services via AWS Skill Builder
Stable Diffusion Crash Course for Beginners
freeCodeCamp