Neural Radiance Fields for View Synthesis
Offered By: Andreas Geiger via YouTube
Course Description
Overview
Syllabus
Intro
The problem of novel view interpolation
RGB-alpha volume rendering for view synthesis
Neural networks as a continuous shape representation
Neural network replaces large N-d array
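
The idea in the two items above can be sketched in a few lines: a small MLP queried at continuous coordinates stands in for a huge discrete RGB-alpha array. A minimal, hypothetical sketch (layer widths and names are illustrative, not the lecture's exact architecture):

```python
import torch
import torch.nn as nn

# Toy stand-in for the scene representation: a continuous function from a
# 3-D position (x, y, z) to (R, G, B, sigma), replacing a dense N-d voxel
# grid. Widths are illustrative; the real model is larger and applies a
# positional encoding to the input first.
field = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),  # outputs: RGB colour plus density sigma
)

rgb_sigma = field(torch.tensor([[0.1, -0.3, 0.7]]))  # query any continuous point
```
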
Generate views with traditional volume rendering
Sigma parametrization for continuous opacity
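
These items rest on the classical emission-absorption quadrature. A sketch, assuming per-ray samples with densities `sigmas`, colours `colors`, and spacings `deltas` (names are mine); the term `1 - exp(-sigma * delta)` is the continuous-opacity parametrization named above:

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite one ray: sigmas (N,), colors (N, 3) in [0, 1], deltas (N,)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)       # opacity of each segment
    trans = np.cumprod(1.0 - alphas + 1e-10)      # cumulative transparency
    trans = np.concatenate([[1.0], trans[:-1]])   # T_i: light reaching sample i
    weights = trans * alphas                      # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0), weights
```
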
Two pass rendering: coarse
Two pass rendering: fine
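
The coarse/fine split can be read as inverse-transform sampling: the coarse pass's compositing weights define a piecewise-constant PDF along the ray, and the fine pass draws its extra samples from it. A sketch reusing the `weights` from the rendering snippet above; `bin_edges` (one more entry than `weights`) is an assumed name for the sample-interval boundaries:

```python
import numpy as np

def sample_fine(bin_edges, weights, n_fine, rng=np.random.default_rng(0)):
    """Place fine samples where the coarse pass found mass."""
    pdf = (weights + 1e-5) / (weights + 1e-5).sum()   # piecewise-constant PDF
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])     # its CDF over the bins
    u = rng.uniform(size=n_fine)
    return np.interp(u, cdf, bin_edges)               # invert the CDF
```
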
Viewing directions as input
Volume rendering is trivially differentiable
Optimize with gradient descent on rendering loss
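
Since every step of the quadrature is smooth, autodiff pushes the photometric loss straight through the renderer. A toy, hypothetical setup: optimise free per-sample densities and colours on a single ray until the rendered pixel matches a target colour; in the actual method, the MLP's weights take the place of these free variables:

```python
import torch
import torch.nn.functional as F

target = torch.tensor([0.2, 0.6, 0.9])            # ground-truth pixel colour
sigmas = torch.zeros(64, requires_grad=True)      # free densities (stand-in for the MLP)
colors = torch.zeros(64, 3, requires_grad=True)   # free colour logits
deltas = torch.full((64,), 1.0 / 64)
opt = torch.optim.Adam([sigmas, colors], lr=1e-2)

for step in range(500):
    alphas = 1 - torch.exp(-F.softplus(sigmas) * deltas)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alphas + 1e-10]), dim=0)[:-1]
    pixel = ((trans * alphas).unsqueeze(-1) * torch.sigmoid(colors)).sum(dim=0)
    loss = ((pixel - target) ** 2).sum()          # rendering loss
    opt.zero_grad(); loss.backward(); opt.step()
```
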
NeRF encodes convincing view-dependent effects using directional dependence
NeRF encodes detailed scene geometry
Going forward
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Input Mapping
Key Points
Using kernel regression to approximate deep networks
NTK (neural tangent kernel): modeling a deep network as kernel regression
Sinusoidal mapping results in a composed stationary NTK
Resulting composed NTK is stationary
No-mapping NTK clearly not stationary
Toy example of stationarity in practice
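
A hedged one-dimensional illustration of the stationarity point (not necessarily the lecture's exact toy example): by cos(a)cos(b) + sin(a)sin(b) = cos(a - b), the dot-product kernel of a sinusoidal mapping depends only on the offset x - x', and the composed NTK, being a function of this feature kernel, inherits the shift invariance:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=8)  # random 1-D frequencies

def gamma(x):
    """Sinusoidal input mapping for a scalar x."""
    return np.concatenate([np.cos(2 * np.pi * B * x),
                           np.sin(2 * np.pi * B * x)])

# Same offset x - x' = 0.2, so both dot products are identical:
print(gamma(0.30) @ gamma(0.10))
print(gamma(0.75) @ gamma(0.55))
```
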
Modifying mapping manipulates kernel spectrum
Kernel spectrum has a dramatic effect on convergence and generalization
Frequency sampling distribution: bandwidth matters more than shape
Mapping Code
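
The mapping itself is a few lines. A NumPy sketch consistent with the paper's Gaussian Fourier-feature mapping (variable names are mine); the frequency scale `sigma_b` is the bandwidth referred to above, the hyperparameter whose value matters more than the sampling distribution's shape:

```python
import numpy as np

def input_mapping(x, b):
    """gamma(x) = [sin(2*pi*x b^T), cos(2*pi*x b^T)] for inputs x (N, d)."""
    proj = 2.0 * np.pi * x @ b.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

rng = np.random.default_rng(0)
sigma_b, m, d = 10.0, 256, 2                  # bandwidth, num frequencies, input dim
b = sigma_b * rng.normal(size=(m, d))         # Gaussian random frequencies
feats = input_mapping(rng.uniform(size=(4, d)), b)   # shape (4, 2 * m)
```
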
2D Images
3D Shape
Indirect supervision tasks: Ground Truth
Taught by
Andreas Geiger
Related Courses
NeRF - Representing Scenes as Neural Radiance Fields for View Synthesis
Yannic Kilcher via YouTube
NeRFs - Neural Radiance Fields - Paper Explained
Aladdin Persson via YouTube
Learning 3D Reconstruction in Function Space - Long Version
Andreas Geiger via YouTube
Nvidia Instant-NGP - Create Your Own NeRF Scene From a Video or Images - Hands-On Tutorial
Prodramp via YouTube
Turning the Internet Into 3D - 3D Content with NeRF and Gaussian Splatting
Linux Foundation via YouTube