The Unreasonable Effectiveness of RNNs - Article and Visualization Commentary
Offered By: Jay Alammar via YouTube
Course Description
Overview
Explore a comprehensive commentary on Andrej Karpathy's influential 2015 article "The Unreasonable Effectiveness of Recurrent Neural Networks." Delve into the groundbreaking developments in sequence-to-sequence models that paved the way for modern NLP advancements like GPT-3. Learn about character-level language models, various RNN types, and their applications. Examine prediction and activation visualizations, neuron behavior, and subsequent related work in the field. Gain insights into how this article helped shape the tech community's understanding of machine learning's potential for handling text data.
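For readers unfamiliar with the central idea the video covers, the following is a minimal sketch of a character-level language model's forward pass: fold each character into a recurrent hidden state, then read out a probability distribution over the next character. This is not code from the video or from Karpathy's article; the toy corpus, layer sizes, and variable names are illustrative assumptions.

# Minimal character-level RNN sketch (illustrative, not the course's code).
# The weights are untrained, so the output distribution is near-uniform;
# the point is the mechanics: hidden state in, next-char probabilities out.
import numpy as np

text = "hello hello hello "               # toy corpus (assumption)
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
vocab, hidden = len(chars), 16            # sizes chosen arbitrarily

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (hidden, vocab))   # input -> hidden
Whh = rng.normal(0, 0.01, (hidden, hidden))  # hidden -> hidden (the recurrence)
Why = rng.normal(0, 0.01, (vocab, hidden))   # hidden -> output logits

def step(h, char_idx):
    """One RNN step: absorb the current character into the hidden state,
    then produce a softmax distribution over the next character."""
    x = np.zeros(vocab)
    x[char_idx] = 1.0                          # one-hot encode the character
    h = np.tanh(Wxh @ x + Whh @ h)             # updated hidden state
    logits = Why @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()

# Run the network over the corpus one character at a time.
h = np.zeros(hidden)
for c in text[:-1]:
    h, p = step(h, stoi[c])
print("next-char distribution:", dict(zip(chars, p.round(3))))

In practice the weights would be trained with backpropagation through time, and text is generated by repeatedly sampling from the output distribution and feeding the sampled character back in, which is the sampling loop Karpathy's article builds on.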
Syllabus
Introduction
Character-level language models
RNN types figure
Fun with RNNs
Prediction and activation visualization 1
Neuron visualization
Subsequent related work
Taught by
Jay Alammar
Related Courses
Intro to Deep Learning with PyTorch (Facebook via Udacity)
Natural Language Processing with Sequence Models (DeepLearning.AI via Coursera)
Deep Learning (Universidad Anáhuac via edX)
Create a Superhero Name Generator with TensorFlow (Coursera Project Network via Coursera)
Natural Language Generation in Python (DataCamp)