Recurrent Neural Networks, Transformers, and Attention
Offered By: Alexander Amini via YouTube
Course Description
Overview
Dive into the world of advanced deep learning techniques with this comprehensive lecture from MIT's Introduction to Deep Learning course. Explore the intricacies of Recurrent Neural Networks (RNNs), Transformers, and Attention mechanisms. Begin with an introduction to sequence modeling and neurons with recurrence, then delve into the fundamentals of RNNs, including their intuition and unfolding process. Learn how to build RNNs from scratch and understand the design criteria for sequential modeling through a word prediction example. Discover the backpropagation through time algorithm and address gradient issues in RNNs. Investigate Long Short-Term Memory (LSTM) networks and their applications. Finally, uncover the power of attention mechanisms, their intuition, relationship to search, and implementation in neural networks. Gain insights into scaling attention and its various applications in deep learning.
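The "RNNs from scratch" and "unfolding" topics mentioned above boil down to applying one recurrence step repeatedly over a sequence. A minimal NumPy sketch of that idea (dimensions, weight names, and initialization are illustrative assumptions, not taken from the lecture):

```python
import numpy as np

# Hypothetical dimensions chosen purely for illustration.
np.random.seed(0)
input_dim, hidden_dim = 4, 8

W_xh = np.random.randn(hidden_dim, input_dim) * 0.1   # input-to-hidden weights
W_hh = np.random.randn(hidden_dim, hidden_dim) * 0.1  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_dim)

def rnn_step(x_t, h_prev):
    """One recurrence step: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# "Unfolding" the RNN is just iterating this step over the sequence,
# carrying the hidden state forward in time.
h = np.zeros(hidden_dim)
sequence = [np.random.randn(input_dim) for _ in range(5)]
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h.shape)  # (8,)
```

Backpropagation through time, also covered in the lecture, differentiates through this same unrolled loop, which is where the gradient issues it discusses arise.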
Syllabus
- Introduction
- Sequence modeling
- Neurons with recurrence
- Recurrent neural networks
- RNN intuition
- Unfolding RNNs
- RNNs from scratch
- Design criteria for sequential modeling
- Word prediction example
- Backpropagation through time
- Gradient issues
- Long short-term memory (LSTM)
- RNN applications
- Attention fundamentals
- Intuition of attention
- Attention and search relationship
- Learning attention with neural networks
- Scaling attention and applications
- Summary
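The attention items in the syllabus center on the query–key–value formulation, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal sketch of that computation (shapes and variable names are illustrative assumptions, not drawn from the lecture itself):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity, scaled
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 positions, 4-dimensional keys/queries/values.
np.random.seed(0)
seq_len, d_k = 3, 4
Q = np.random.randn(seq_len, d_k)
K = np.random.randn(seq_len, d_k)
V = np.random.randn(seq_len, d_k)
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4); each row of w sums to 1
```

The "attention and search" analogy in the syllabus maps onto this directly: queries are search terms, keys index the content, and the softmax weights decide how much of each value to retrieve.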
Taught by
Alexander Amini: https://www.youtube.com/@AAmini/videos
Related Courses
- Neural Networks for Machine Learning, University of Toronto via Coursera
- 機器學習技法 (Machine Learning Techniques), National Taiwan University via Coursera
- Machine Learning Capstone: An Intelligent Application with Deep Learning, University of Washington via Coursera
- Прикладные задачи анализа данных (Applied Data Analysis Problems), Moscow Institute of Physics and Technology via Coursera
- Leading Ambitious Teaching and Learning, Microsoft via edX