MIT 6.S191 - Recurrent Neural Networks
Offered By: Alexander Amini via YouTube
Course Description
Overview
Dive into the world of Recurrent Neural Networks in this comprehensive lecture from MIT's Introduction to Deep Learning course. Explore sequence modeling, neurons with recurrence, and the intuition behind RNNs. Learn how to unfold RNNs, build them from scratch, and understand the design criteria for sequential modeling. Discover practical applications through a word prediction example, and delve into advanced concepts like backpropagation through time and gradient issues. Gain insights into Long Short-Term Memory (LSTM) networks, various RNN applications, and the powerful attention mechanism. By the end of this hour-long session, you'll have a solid foundation in RNNs and their role in deep learning.
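To give a flavor of the "RNNs from scratch" and "Unfolding RNNs" topics covered in the lecture, here is a minimal NumPy sketch of a vanilla RNN step applied across a short sequence. The weight names, dimensions, and random data below are illustrative assumptions, not code from the course.

```python
import numpy as np

# Minimal sketch of a vanilla (Elman) RNN cell, unrolled over a toy sequence.
# Weight names and sizes are illustrative, not taken from the lecture's code.

rng = np.random.default_rng(0)

input_size, hidden_size, output_size = 8, 16, 4
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden (recurrence)
W_hy = rng.normal(scale=0.1, size=(output_size, hidden_size))  # hidden-to-output

def rnn_step(x_t, h_prev):
    """One time step: update the hidden state, then emit an output."""
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev)  # new hidden state
    y_t = W_hy @ h_t                           # output at this step
    return y_t, h_t

# "Unfolding" the RNN: apply the same cell at every time step,
# carrying the hidden state forward through the sequence.
sequence = rng.normal(size=(5, input_size))  # 5 time steps of toy input
h = np.zeros(hidden_size)
for x_t in sequence:
    y, h = rnn_step(x_t, h)

print(y.shape, h.shape)  # (4,) (16,)
```

Because the same weights are reused at every step, gradients flow backward through this loop during training, which is the backpropagation-through-time and gradient-stability discussion the lecture builds toward.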
Syllabus
- Introduction
- Sequence modeling
- Neurons with recurrence
- Recurrent neural networks
- RNN intuition
- Unfolding RNNs
- RNNs from scratch
- Design criteria for sequential modeling
- Word prediction example
- Backpropagation through time
- Gradient issues
- Long short-term memory (LSTM)
- RNN applications
- Attention
- Summary
Taught by
Alexander Amini (https://www.youtube.com/@AAmini/videos)
Related Courses
- Reinforcement Learning for Trading Strategies (New York Institute of Finance via Coursera)
- Natural Language Processing with Sequence Models (DeepLearning.AI via Coursera)
- Fake News Detection with Machine Learning (Coursera Project Network via Coursera)
- English/French Translator: Long Short Term Memory Networks (Coursera Project Network via Coursera)
- Text Classification Using Word2Vec and LSTM on Keras (Coursera Project Network via Coursera)