
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Offered By: Yannic Kilcher via YouTube

Tags

Deep Learning Courses, Neural Radiance Fields Courses, Positional Encoding Courses

Course Description

Overview

Explore the groundbreaking Neural Radiance Fields (NeRF) technique for view synthesis in this comprehensive video explanation. Dive into the core concepts of NeRF, including its ability to encode an entire scene in the weights of a neural network and achieve state-of-the-art results from sparse input views. Learn about the differentiable volume rendering procedure, view-direction dependence, and how NeRF captures fine structural detail, reflections, and transparency. Follow along as the video breaks down the training process, radiance field volume rendering, positional encoding, and hierarchical volume sampling. Gain insight into the experimental results and understand how NeRF outperforms prior work in neural rendering and view synthesis.
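
For a concrete picture of two of the components the video covers, here is a minimal NumPy sketch of positional encoding and the quadrature form of the volume rendering integral. The formulas are the ones stated in the NeRF paper; the function names and array shapes are illustrative assumptions, not the video's or the authors' reference code.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Map each coordinate p to
    (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p)).
    x: (..., D) array of coordinates; returns (..., D * 2 * num_freqs)."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi    # 2^k * pi for k = 0..L-1
    angles = x[..., None] * freqs                    # (..., D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

def composite_ray(sigmas, colors, deltas):
    """Quadrature form of the volume rendering integral:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = exp(-sum_{j<i} sigma_j * delta_j) is accumulated transmittance.
    sigmas: (N,) densities; colors: (N, 3) RGB; deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                         # each sample's contribution
    return (weights[:, None] * colors).sum(axis=0)   # final RGB for the ray

# With num_freqs = 10, a 3D point maps to a 60-dimensional encoding,
# matching the L = 10 setting in the paper (L = 4 is used for directions).
print(positional_encoding(np.zeros((1, 3))).shape)   # (1, 60)
```

Because every operation above is differentiable, the photometric loss on rendered pixels can be backpropagated straight through the compositing step into the network that predicts sigmas and colors, which is what makes training from posed images possible.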

Syllabus

- Intro & Overview
- View Synthesis Task Description
- The Fundamental Difference from Classic Deep Learning
- NeRF Core Concept
- Training the NeRF from Sparse Views
- Radiance Field Volume Rendering
- Resulting View Dependence
- Positional Encoding
- Hierarchical Volume Sampling (sketched in code after this list)
- Experimental Results
- Comments & Conclusion
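
The hierarchical volume sampling step named in the syllabus can be summarized compactly: the coarse network's compositing weights along a ray define a piecewise-constant PDF, and the fine network's sample positions are drawn from it by inverse-transform sampling, concentrating samples where the scene has density. Below is a minimal NumPy sketch under that reading; `sample_pdf` and its argument shapes are assumptions for illustration, not the video's or the paper's code.

```python
import numpy as np

def sample_pdf(bins, weights, n_samples, rng):
    """Draw fine-network sample locations along a ray in proportion to the
    coarse network's per-bin weights (inverse-transform sampling).

    bins:    (N + 1,) edges of the coarse sampling intervals along the ray
    weights: (N,)     compositing weights produced by the coarse pass
    """
    pdf = weights / (weights.sum() + 1e-10)          # normalize to a PDF
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])    # (N + 1,) CDF at bin edges
    u = rng.uniform(size=n_samples)                  # uniform draws in [0, 1)
    idx = np.searchsorted(cdf, u, side="right") - 1  # bin containing each draw
    idx = np.clip(idx, 0, len(weights) - 1)
    # Linearly interpolate within the chosen bin.
    t = (u - cdf[idx]) / (cdf[idx + 1] - cdf[idx] + 1e-10)
    return bins[idx] + t * (bins[idx + 1] - bins[idx])

# Example: concentrate 128 fine samples where the coarse pass saw density
# (the paper uses 64 coarse plus 128 fine samples per ray).
rng = np.random.default_rng(0)
coarse_bins = np.linspace(2.0, 6.0, 65)              # 64 coarse intervals
coarse_weights = np.exp(-((np.linspace(2.0, 6.0, 64) - 4.0) ** 2))
fine_samples = sample_pdf(coarse_bins, coarse_weights, 128, rng)
```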


Taught by

Yannic Kilcher

Related Courses

NeRFs - Neural Radiance Fields - Paper Explained
Aladdin Persson via YouTube
Neural Radiance Fields for View Synthesis
Andreas Geiger via YouTube
Learning 3D Reconstruction in Function Space - Long Version
Andreas Geiger via YouTube
Nvidia Instant-NGP - Create Your Own NeRF Scene From a Video or Images - Hands-On Tutorial
Prodramp via YouTube
Turning the Internet Into 3D - 3D Content with NeRF and Gaussian Splatting
Linux Foundation via YouTube