NeRF - Representing Scenes as Neural Radiance Fields for View Synthesis

Offered By: Yannic Kilcher via YouTube

Tags

Deep Learning, Neural Radiance Fields, Positional Encoding

Course Description

Overview

Explore the groundbreaking Neural Radiance Fields (NeRF) technique for view synthesis in this comprehensive video explanation. Dive into the core concepts of NeRF, including its ability to embed an entire scene into the weights of a neural network and achieve state-of-the-art results from sparse input views. Learn about the differentiable volume rendering procedure, directional dependence, and how NeRF captures fine structural details, reflections, and transparency. Follow along as the video breaks down the training process, radiance field volume rendering, positional encoding, and hierarchical volume sampling. Gain insights into the experimental results and understand how NeRF outperforms prior work in neural rendering and view synthesis.
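To make the rendering step above concrete, here is a minimal NumPy sketch of the volume rendering quadrature the NeRF paper uses to composite per-sample densities and colors along a camera ray. The function name, array shapes, and sample values are illustrative assumptions, not code from any released NeRF implementation.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite one ray with the NeRF quadrature rule:
        C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
        where T_i = exp(-sum_{j<i} sigma_j * delta_j).

    sigmas: (N,)   volume density at each of N samples along the ray
    colors: (N, 3) RGB color emitted at each sample
    deltas: (N,)   distance between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)             # per-segment opacity
    # Transmittance: probability the ray reaches sample i unoccluded;
    # prod_{j<i} (1 - alpha_j) equals exp(-sum_{j<i} sigma_j * delta_j).
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas                            # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)      # composited RGB

# Illustrative usage with 64 random samples along a single ray.
rng = np.random.default_rng(0)
pixel = render_ray(rng.uniform(0.0, 5.0, 64),
                   rng.uniform(0.0, 1.0, (64, 3)),
                   np.full(64, 0.02))
```

Because every operation here is differentiable, gradients can flow from a rendered pixel back to the per-sample densities and colors, which is what lets NeRF train the underlying network from posed photographs alone.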

Syllabus

- Intro & Overview
- View Synthesis Task Description
- The fundamental difference from classic Deep Learning
- NeRF Core Concept
- Training the NeRF from sparse views
- Radiance Field Volume Rendering
- Resulting View Dependence
- Positional Encoding (see the code sketch after this syllabus)
- Hierarchical Volume Sampling
- Experimental Results
- Comments & Conclusion
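
The Positional Encoding chapter refers to the sinusoidal mapping γ(p) from the NeRF paper, which lifts each input coordinate to a stack of sines and cosines so the MLP can fit high-frequency detail. Below is a hedged NumPy sketch; L=10 matches the paper's choice for spatial coordinates, but the function name and the sin-then-cos ordering (equivalent to the paper's interleaved form up to a permutation) are illustrative.

```python
import numpy as np

def positional_encoding(p, L=10):
    """Encode coordinates as
        gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ...,
                    sin(2^{L-1} pi p), cos(2^{L-1} pi p)).

    p: (..., D) coordinates, assumed normalized to [-1, 1]
    returns: (..., D * 2L) encoded features
    """
    freqs = (2.0 ** np.arange(L)) * np.pi      # frequencies 2^k * pi
    angles = p[..., None] * freqs              # (..., D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)      # flatten to (..., D * 2L)

# Example: encode a single 3D point; output has 3 * 2 * 10 = 60 features.
print(positional_encoding(np.array([0.1, -0.4, 0.7])).shape)  # (60,)
```

The paper applies this with L=10 to the 3D position and L=4 to the viewing direction before feeding both into the MLP.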


Taught by

Yannic Kilcher

Related Courses

Perceiver - General Perception with Iterative Attention
Yannic Kilcher via YouTube
LambdaNetworks - Modeling Long-Range Interactions Without Attention
Yannic Kilcher via YouTube
Attention Is All You Need - Transformer Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
NeRFs - Neural Radiance Fields - Paper Explained
Aladdin Persson via YouTube
Deep Dive into the Transformer Encoder Architecture
CodeEmporium via YouTube