MIT 6.S191 - Recurrent Neural Networks
Offered By: Alexander Amini via YouTube
Course Description
Overview
Dive into the world of Recurrent Neural Networks in this comprehensive lecture from MIT's Introduction to Deep Learning course. Explore sequence modeling, neurons with recurrence, and the intuition behind RNNs. Learn how to unfold RNNs, build them from scratch, and understand the design criteria for sequential modeling. Discover practical applications through a word prediction example, and delve into advanced concepts like backpropagation through time and gradient issues. Gain insights into Long Short-Term Memory (LSTM) networks, various RNN applications, and the powerful attention mechanism. By the end of this hour-long session, you'll have a solid foundation in RNNs and their role in deep learning.
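To make the "build RNNs from scratch" idea concrete, here is a minimal sketch (not taken from the lecture itself) of a single recurrent cell that updates a hidden state from the previous state and the current input, then unfolds that update over a sequence. All names, sizes, and initialization choices below are illustrative assumptions.

```python
import numpy as np

class SimpleRNNCell:
    """Illustrative vanilla RNN cell: h_t = tanh(W_hh h_{t-1} + W_xh x_t + b)."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Input-to-hidden and hidden-to-hidden weight matrices (small random init).
        self.W_xh = rng.normal(0.0, 0.1, (hidden_dim, input_dim))
        self.W_hh = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.b_h = np.zeros(hidden_dim)

    def step(self, x_t, h_prev):
        # One timestep of the recurrence.
        return np.tanh(self.W_hh @ h_prev + self.W_xh @ x_t + self.b_h)

    def forward(self, xs):
        # "Unfold" the RNN: apply the same cell at every timestep,
        # carrying the hidden state forward.
        h = np.zeros(self.W_hh.shape[0])
        states = []
        for x_t in xs:
            h = self.step(x_t, h)
            states.append(h)
        return np.stack(states)

# Example usage: a sequence of 5 timesteps with 3 features each.
cell = SimpleRNNCell(input_dim=3, hidden_dim=8)
sequence = np.random.default_rng(1).normal(size=(5, 3))
hidden_states = cell.forward(sequence)
print(hidden_states.shape)  # (5, 8)
```

The key design point this illustrates is weight sharing: the same parameters are reused at every timestep, which is what makes backpropagation through time (and its gradient issues) specific to recurrent models.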
Syllabus
- Introduction
- Sequence modeling
- Neurons with recurrence
- Recurrent neural networks
- RNN intuition
- Unfolding RNNs
- RNNs from scratch
- Design criteria for sequential modeling
- Word prediction example
- Backpropagation through time
- Gradient issues
- Long short-term memory (LSTM)
- RNN applications
- Attention
- Summary
Taught by
Alexander Amini (https://www.youtube.com/@AAmini/videos)
Related Courses
- Simple Recurrent Neural Network with Keras (Coursera Project Network via Coursera)
- Deep Learning: Advanced Natural Language Processing and RNNs (Udemy)
- Recurrent Neural Networks (RNNs) for Language Modeling with Keras (DataCamp)
- Deep Learning: Recurrent Neural Networks in Python (Udemy)
- Basics of Deep Learning (Udemy)