YoVDO

The Unreasonable Effectiveness of RNNs - Article and Visualization Commentary

Offered By: Jay Alammar via YouTube

Tags

Recurrent Neural Networks (RNN) Courses
Machine Learning Courses
Text Generation Courses

Course Description

Overview

Explore a comprehensive commentary on Andrej Karpathy's influential 2015 article "The Unreasonable Effectiveness of Recurrent Neural Networks." Delve into the groundbreaking developments in sequence-to-sequence models that paved the way for modern NLP advancements like GPT-3. Learn about character-level language models, various RNN types, and their applications. Examine prediction and activation visualizations, neuron behavior, and subsequent related work in the field. Gain insights into how this article helped shape the tech community's understanding of machine learning's potential in handling text data.
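To make the idea of a character-level language model concrete, here is a minimal sketch of a single-layer character RNN forward pass. This is an illustration of the general technique the article covers, not Karpathy's actual code: the weight names (Wxh, Whh, Why), the hidden size, and the toy vocabulary are all assumptions chosen for brevity, and the weights are random and untrained.

```python
import numpy as np

# Toy vocabulary and one-hot index for each character (assumed for this sketch).
vocab = sorted(set("hello"))          # ['e', 'h', 'l', 'o']
V, H = len(vocab), 8                  # vocab size, hidden size (arbitrary)
ix = {c: i for i, c in enumerate(vocab)}

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.1, (H, V))      # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))      # hidden-to-hidden (the recurrence)
Why = rng.normal(0, 0.1, (V, H))      # hidden-to-output weights

def step(h, ch):
    """One RNN step: consume character ch, return new state and next-char probs."""
    x = np.zeros(V)
    x[ix[ch]] = 1.0                   # one-hot encode the input character
    h = np.tanh(Wxh @ x + Whh @ h)    # update hidden state with new input + context
    y = Why @ h                       # unnormalized scores over the vocabulary
    p = np.exp(y) / np.exp(y).sum()   # softmax -> probability of each next char
    return h, p

h = np.zeros(H)                       # initial hidden state
for ch in "hell":                     # feed the prefix one character at a time
    h, p = step(h, ch)
print(p.sum())                        # probabilities over the vocab sum to 1
```

Because the hidden state h is threaded through every step, the prediction for the next character depends on the entire prefix seen so far; training would adjust the three weight matrices so those predictions match real text.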

Syllabus

Introduction
Character-level language models
RNN types figure
Fun with RNNs
Prediction and activation visualization 1
Neuron visualization
Subsequent related work


Taught by

Jay Alammar

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent