
The Unreasonable Effectiveness of RNNs - Article and Visualization Commentary

Offered By: Jay Alammar via YouTube

Tags

Recurrent Neural Networks (RNN) Courses
Machine Learning Courses
Text Generation Courses

Course Description

Overview

Explore a comprehensive commentary on Andrej Karpathy's influential 2015 article "The Unreasonable Effectiveness of Recurrent Neural Networks." Delve into the early sequence-modeling developments that paved the way for modern NLP advances such as GPT-3. Learn about character-level language models, the main RNN architecture types, and their applications. Examine prediction and activation visualizations, the behavior of individual neurons, and subsequent related work in the field. Gain insight into how this article shaped the tech community's understanding of machine learning's potential for handling text data.
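To make "character-level language model" concrete, here is a minimal sketch of one forward step of such an RNN: the network reads one character at a time and produces a probability distribution over the next character. This is an illustrative NumPy sketch only; the vocabulary, weight names, and sizes are assumptions, not code from the article or the video.

```python
import numpy as np

# Toy setup (assumed for illustration): a tiny character vocabulary.
chars = sorted(set("hello"))                 # ['e', 'h', 'l', 'o']
char_to_ix = {c: i for i, c in enumerate(chars)}

hidden_size = 8
V = len(chars)
rng = np.random.default_rng(0)
Wxh = rng.standard_normal((hidden_size, V)) * 0.01           # input -> hidden
Whh = rng.standard_normal((hidden_size, hidden_size)) * 0.01  # hidden -> hidden
Why = rng.standard_normal((V, hidden_size)) * 0.01           # hidden -> output
bh = np.zeros((hidden_size, 1))
by = np.zeros((V, 1))

def step(ch, h):
    """Advance the RNN by one character; return the next hidden state
    and a probability distribution over the next character."""
    x = np.zeros((V, 1))
    x[char_to_ix[ch]] = 1.0                       # one-hot input encoding
    h = np.tanh(Wxh @ x + Whh @ h + bh)           # recurrent state update
    y = Why @ h + by                              # unnormalized scores
    p = np.exp(y - y.max()) / np.exp(y - y.max()).sum()  # softmax
    return h, p

# Feed a prefix character by character; the hidden state carries context.
h = np.zeros((hidden_size, 1))
for ch in "hell":
    h, p = step(ch, h)
# p now holds the (untrained) model's next-character distribution.
```

Training would adjust the weight matrices so that `p` assigns high probability to the character that actually follows; sampling from `p` repeatedly is what generates the text shown in Karpathy's article.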

Syllabus

Introduction
Character-level language models
RNN types figure
Fun with RNNs
Prediction and activation visualization 1
Neuron visualization
Subsequent related work


Taught by

Jay Alammar

Related Courses

Simple Recurrent Neural Network with Keras
Coursera Project Network via Coursera
Deep Learning: Advanced Natural Language Processing and RNNs
Udemy
Recurrent Neural Networks (RNNs) for Language Modeling with Keras
DataCamp
Deep Learning: Recurrent Neural Networks in Python
Udemy
Basics of Deep Learning
Udemy