Sample-Complexity of Estimating Convolutional and Recurrent Neural Networks
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the sample complexity of estimating convolutional and recurrent neural networks in this 35-minute lecture by Aarti Singh of Carnegie Mellon University, part of the Simons Institute's Frontiers of Deep Learning series. Gain insights into FNN, CNN, and RNN architectures, CNN generative models, and minimax analysis. Examine the estimator and its assumptions along with the main results, followed by a discussion of related work and formal upper and lower bounds. Work through the proof sketch and review experiments comparing CNNs (with average and weighted pooling) against FNNs. Conclude with open questions in the field, deepening your understanding of deep learning complexity and estimation challenges.
Syllabus
Intro
FNN, CNN and RNN architectures
CNN generative models
Minimax analysis
Estimator and Assumptions
Main results (Informal)
Related work
Upper bounds (formal)
Proof sketch
Lower bounds (formal)
Experiments - CNN (average pooling) vs FNN
Experiments - CNN (weighted pooling) vs FNN
Open questions
Taught by
Simons Institute
Related Courses
Neural Networks for Machine Learning — University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) — National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning — University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis) — Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning — Microsoft via edX