Parameter Sharing - Recurrent and Convolutional Nets
Offered By: Alfredo Canziani via YouTube
Course Description
Overview
A lecture from Alfredo Canziani's deep learning course on parameter sharing in recurrent and convolutional networks, covering hypernetworks, shared weights and gradient accumulation, recurrent-net training (unrolling in time, vanishing and exploding gradients, GRUs, LSTMs, attention), and the fundamentals of convolutional nets.
Syllabus
 – Welcome to class
 – Hypernetworks
 – Shared weights
 – Parameter sharing ⇒ adding the gradients (see the first code sketch after this syllabus)
 – Max and sum reductions
 – Recurrent nets
 – Unrolling in time
 – Vanishing and exploding gradients (see the second code sketch after this syllabus)
 – Math on the whiteboard
 – RNN tricks
 – RNN for differential equations
 – GRU
 – What is a memory?
 – LSTM – Long Short-Term Memory net
 – Multilayer LSTM
 – Attention for sequence to sequence mapping
 – Convolutional nets
 – Detecting motifs in images
 – Convolution definitions
 – Backprop through convolutions
 – Stride and skip: subsampling and convolution “à trous” (see the third code sketch after this syllabus)
 – Convolutional net architecture
 – Multiple convolutions
 – Vintage ConvNets
 – How does the brain interpret images?
 – Hubel & Wiesel's model of the visual cortex
 – Invariance and equivariance of ConvNets
 – In the next episode…
 – Training time, iteration cycle, and historical remarks
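
The syllabus item "Parameter sharing ⇒ adding the gradients" states a general autograd fact: when one parameter is reused in several places in the forward pass, backpropagation adds up the gradient contributions from each use. A minimal PyTorch sketch of this behavior (the toy values are illustrative, not from the lecture):

```python
import torch

# One parameter shared across two terms of the forward pass.
w = torch.tensor(2.0, requires_grad=True)
x1 = torch.tensor(3.0)
x2 = torch.tensor(5.0)

# y = w*x1 + w*x2, so dy/dw = x1 + x2: one gradient per use, summed.
y = w * x1 + w * x2
y.backward()

print(w.grad)  # tensor(8.) — the contributions 3 and 5 are added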
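Likewise, the items on unrolling in time and on vanishing and exploding gradients can be illustrated with a scalar linear recurrence: unrolled over T steps, the gradient with respect to the shared weight scales like w^(T-1), shrinking toward zero when |w| < 1 and growing without bound when |w| > 1. A toy sketch under that simplification (scalar state, no input, no nonlinearity):

```python
import torch

def grad_through_time(w_value: float, steps: int) -> float:
    """Gradient of the final state of h_t = w * h_{t-1} w.r.t. the shared w."""
    w = torch.tensor(w_value, requires_grad=True)
    h = torch.tensor(1.0)
    for _ in range(steps):   # unroll the recurrence in time
        h = w * h            # the same w is reused at every step
    h.backward()             # h = w**steps, so dh/dw = steps * w**(steps - 1)
    return w.grad.item()

print(grad_through_time(0.5, 30))  # ~5.6e-08: vanishing for |w| < 1
print(grad_through_time(1.5, 30))  # ~3.8e+06: exploding for |w| > 1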
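Finally, the stride and skip item contrasts two ways of enlarging a filter's reach: striding subsamples the output, while dilated convolution ("à trous", with holes) spreads the kernel taps apart without subsampling. A short PyTorch sketch contrasting the two (the layer sizes are arbitrary examples):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 32, 32)  # (batch, channels, height, width)

strided = nn.Conv2d(1, 1, kernel_size=3, stride=2)    # subsampling
a_trous = nn.Conv2d(1, 1, kernel_size=3, dilation=2)  # convolution "à trous"

print(strided(x).shape)  # torch.Size([1, 1, 15, 15]) — output is subsampled
print(a_trous(x).shape)  # torch.Size([1, 1, 28, 28]) — holes widen the receptive field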
Taught by
Alfredo Canziani
Related Courses
SIREN - Implicit Neural Representations with Periodic Activation Functions (Yannic Kilcher via YouTube)
Textual Inversion and Hypernetworks - Stable Diffusion 2 (Nerdy Rodent via YouTube)
Stable Diffusion Style Technique Comparison - Hypernetwork vs. Textual Inversion (kasukanra via YouTube)
Emergent Hypernetworks in Weakly Coupled Oscillators - IPAM at UCLA (Institute for Pure & Applied Mathematics (IPAM) via YouTube)
Improving Pareto Front Learning via Multi-Head HyperNetwork (VinAI via YouTube)
