Parameter Sharing - Recurrent and Convolutional Nets
Offered By: Alfredo Canziani via YouTube
Course Description
Overview
This lecture covers parameter sharing in two families of architectures: recurrent nets (unrolling in time, vanishing and exploding gradients, GRU and LSTM cells, and attention for sequence-to-sequence mapping) and convolutional nets (convolution definitions, backprop through convolutions, stride and dilation, classic architectures, and the connection to Hubel and Wiesel's model of the visual cortex).
Syllabus
– Welcome to class
– Hypernetworks
– Shared weights
– Parameter sharing ⇒ adding the gradients (see the first sketch after this syllabus)
– Max and sum reductions
– Recurrent nets
– Unrolling in time
– Vanishing and exploding gradients (numerical sketch below)
– Math on the whiteboard
– RNN tricks
– RNN for differential equations
– GRU
– What is a memory?
– LSTM – Long Short-Term Memory net (cell sketch below)
– Multilayer LSTM
– Attention for sequence to sequence mapping
– Convolutional nets
– Detecting motifs in images
– Convolution definitions (Conv1d sketch below)
– Backprop through convolutions
– Stride and skip: subsampling and convolution “à trous” (shape check below)
– Convolutional net architecture
– Multiple convolutions
– Vintage ConvNets
– How does the brain interpret images?
– Hubel & Wiesel's model of the visual cortex
– Invariance and equivariance of ConvNets
– In the next episode…
– Training time, iteration cycle, and historical remarks
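Illustrative sketches
The “shared weights” and “parameter sharing ⇒ adding the gradients” items boil down to one rule: when a single parameter appears in several places in the computation graph, its gradient is the sum of the gradients from each use. A minimal sketch, assuming PyTorch (the snippet is illustrative, not taken from the lecture):

```python
import torch

# One shared parameter used twice in the forward pass.
w = torch.tensor(2.0, requires_grad=True)
x1, x2 = torch.tensor(3.0), torch.tensor(5.0)

# y depends on w through two branches: y = w*x1 + w*x2.
y = w * x1 + w * x2
y.backward()

# dy/dw = x1 + x2 = 8: the gradient contributions from each use add up.
print(w.grad)  # tensor(8.)
```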
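The vanishing-and-exploding-gradients item can be reproduced numerically: backpropagating through T steps of the linear recurrence h_t = w·h_{t−1} multiplies the gradient by w a total of T times, so its magnitude scales like |w|^T. A sketch under the same PyTorch assumption, with made-up values:

```python
import torch

def grad_through_time(w_scale, T=50):
    """Gradient of h_T with respect to h_0 for h_t = w * h_{t-1}."""
    h0 = torch.tensor(1.0, requires_grad=True)
    w = torch.tensor(w_scale)
    h = h0
    for _ in range(T):
        h = w * h              # unrolling in time: T reuses of the same w
    h.backward()
    return h0.grad.item()      # equals w ** T

print(grad_through_time(0.9))  # ~0.005 -> vanishing
print(grad_through_time(1.1))  # ~117   -> exploding
```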
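For the LSTM segment, the gated update that protects the memory cell is easiest to see written out. A from-scratch cell sketch (the layer sizes and names here are mine, chosen for illustration):

```python
import torch
import torch.nn as nn

class LSTMCell(nn.Module):
    """One time step: gates decide what to forget, what to write, what to expose."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, h, c):
        z = self.gates(torch.cat([x, h], dim=-1))
        i, f, g, o = z.chunk(4, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)  # memory cell: additive, gated update
        h = o * torch.tanh(c)          # hidden state passed to the next step
        return h, c

cell = LSTMCell(8, 16)
x, h, c = torch.randn(4, 8), torch.zeros(4, 16), torch.zeros(4, 16)
h, c = cell(x, h, c)                   # h.shape == c.shape == (4, 16)
```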
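Under convolution definitions, a convolution is the same small kernel applied at every spatial position, i.e. parameter sharing across space; backprop through it therefore sums each weight's gradient over all positions where the weight was applied. A shape-and-gradient sketch with torch.nn.Conv1d (sizes are arbitrary):

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, bias=False)
x = torch.randn(1, 1, 10)      # (batch, channels, length)

y = conv(x)                    # the 3 kernel weights slide over every position
print(y.shape)                 # torch.Size([1, 1, 8]): 10 - 3 + 1 valid positions

# Backprop through the convolution: each shared weight's gradient is the sum
# of its contributions from all 8 positions it was applied at.
y.sum().backward()
print(conv.weight.grad.shape)  # torch.Size([1, 1, 3])
```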
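Finally, the stride-and-skip item contrasts subsampling the output (stride) with spreading the kernel taps apart (dilated, or “à trous”, convolution). A quick shape check, again with assumed illustrative sizes:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 16)

strided = nn.Conv1d(1, 1, kernel_size=3, stride=2)    # subsample: keep every 2nd output
dilated = nn.Conv1d(1, 1, kernel_size=3, dilation=2)  # à trous: taps 2 apart, field of view 5

print(strided(x).shape)  # torch.Size([1, 1, 7])  = floor((16 - 3) / 2) + 1
print(dilated(x).shape)  # torch.Size([1, 1, 12]) = 16 - 2*(3 - 1) - 1 + 1
```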
Taught by
Alfredo Canziani
Related Courses
– Neural Networks for Machine Learning (University of Toronto via Coursera)
– Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
– Statistical Learning with R (Stanford University via edX)
– Machine Learning 1—Supervised Learning (Brown University via Udacity)
– Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)