Stable Diffusion: Worldbuilding with 3D and AI Using MVDream - Part 1
Offered By: kasukanra via YouTube
Course Description
Overview
Syllabus
Intro
MVDream: what is it/what does it solve?
Dataset Overview
Camera settings explanation
Multiview perspective
Multiview 2D code dive
Camera embedding
Camera utils
Setting up MVDream 2D environment
Trying out the Gradio server
Start of MVDream-threestudio
Setting up Docker environment for MVDream-threestudio
Explaining why the Gradio server for 3D is not usable
Generating NeRFs through CLI
Exporting meshes
Evaluating MVDream mesh fidelity
Second stage refinement and why I don't recommend it
Redesign from refinement is unusable
Showing some other NeRF to 3D mesh objects
Rendering out a 3D object
Using 3D renders as ControlNet guides
Worldbuilding overview context
Potential room designs
Potential chair designs
Generating albedo map texture in ComfyUI
Using Adobe Substance 3D Sampler to convert albedo into PBR textures
Quick setup of converted PBR textures
Using the same process to generate metal textures
Quick overview of using Cube by CSM to convert a picture to a mesh
Checking refined mesh from Cube
Closing thoughts
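The syllabus entries on camera settings, multiview perspective, camera embedding, and camera utils cover how MVDream conditions each of its jointly generated views on a camera pose, typically expressed as a flattened 4x4 camera-to-world extrinsic. The sketch below illustrates that idea in a minimal, self-contained way; the function names, world-up and axis conventions, and the evenly spaced azimuth layout are illustrative assumptions, not the repo's actual camera_utils API.

```python
import numpy as np

def camera_to_world(elevation_deg: float, azimuth_deg: float, radius: float = 1.0) -> np.ndarray:
    """Build a 4x4 camera-to-world matrix for a camera on a sphere looking at the origin.

    Assumes world +Z is up and an OpenGL-style camera that looks along its -Z axis;
    MVDream's own conventions may differ.
    """
    elev, azim = np.deg2rad(elevation_deg), np.deg2rad(azimuth_deg)
    # Camera position on a sphere of the given radius.
    eye = radius * np.array([
        np.cos(elev) * np.cos(azim),
        np.cos(elev) * np.sin(azim),
        np.sin(elev),
    ])
    forward = -eye / np.linalg.norm(eye)        # direction toward the origin
    up = np.array([0.0, 0.0, 1.0])              # world up
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)               # re-orthogonalized camera up
    c2w = np.eye(4)
    c2w[:3, 0] = right
    c2w[:3, 1] = up
    c2w[:3, 2] = -forward                       # camera looks along its -Z axis
    c2w[:3, 3] = eye
    return c2w

def multiview_camera_embeddings(num_views: int = 4, elevation_deg: float = 15.0) -> np.ndarray:
    """Flatten one extrinsic per evenly spaced azimuth into a (num_views, 16) array,
    i.e. the flattened camera conditioning fed to the model alongside the text prompt."""
    azimuths = np.linspace(0.0, 360.0, num_views, endpoint=False)
    return np.stack([camera_to_world(elevation_deg, a).reshape(-1) for a in azimuths])

if __name__ == "__main__":
    print(multiview_camera_embeddings().shape)  # (4, 16)
```

Per the MVDream paper, these flattened extrinsics are passed through a small MLP and added to the timestep embedding, so the UNet knows which viewpoint each of the jointly denoised views corresponds to.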
Taught by
kasukanra
Related Courses
ComfyUI Master Tutorial - Stable Diffusion XL - Install on PC, Google Colab and RunPod (Software Engineering Courses - SE Courses via YouTube)
Stable Diffusion - Training SDXL 1.0 - Finetune, LoRA, D-Adaptation, Prodigy (kasukanra via YouTube)
ComfyUI and Animate Diff Guide - Creating AI Animations (Prompt Muse via YouTube)
ComfyUI Starter Guide - How to Use It and Join OpenArt Contest (Olivio Sarikas via YouTube)
Improving Stable Diffusion Images with FreeU - Optimizing SDXL, LCM, and Turbo Models (kasukanra via YouTube)