Feedback Transformers - Addressing Some Limitations of Transformers with Feedback Memory

Offered By: Yannic Kilcher via YouTube

Tags

Transformer Models, Artificial Intelligence, Neural Network Architecture

Course Description

Overview

Explore the concept of Feedback Transformers in this 44-minute video lecture. Delve into the limitations of autoregressive Transformers in language modeling and discover how Feedback Transformers address these issues. Learn about information flow in recurrent neural networks and Transformers, complex computations with neural networks, and causal masking. Examine the Feedback Transformer architecture, its connection to Attention-RNNs, and its formal definition. Review experimental results demonstrating the improved performance of this approach in language modeling, machine translation, and reinforcement learning tasks. Gain insights into how Feedback Transformers enhance representation capacity, allowing for smaller, shallower models with stronger performance compared to traditional Transformers.
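As a rough illustration of the mechanism the lecture covers (this is not the paper's or the video's code), below is a minimal NumPy sketch; the function names, the tanh stand-in for the feed-forward block, and the single-head attention are simplifying assumptions. The essential difference from a standard Transformer is that every layer attends to one shared memory, and each memory vector is a learned (softmax-weighted) mixture of all layer outputs from earlier timesteps, so lower layers can see high-level features of the past.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, memory):
    """Single-head dot-product attention of one query vector over past memory."""
    if len(memory) == 0:
        return np.zeros_like(query)
    keys = np.stack(memory)                          # (t, d) memory vectors from earlier steps
    scores = softmax(keys @ query / np.sqrt(len(query)))
    return scores @ keys

def feedback_step(x_t, memory, n_layers, layer_mix):
    """One autoregressive step of a simplified Feedback Transformer.

    All layers share one memory; after the step, the representations of
    every layer are collapsed into a single memory vector for this timestep.
    """
    reps = [x_t]                                     # layer-0 representation (token embedding)
    h = x_t
    for _ in range(n_layers):
        h = h + attend(h, memory)                    # attention over the shared memory
        h = np.tanh(h)                               # stand-in for the feed-forward block
        reps.append(h)
    w = softmax(layer_mix)                           # learned per-layer mixing weights
    m_t = sum(w_l * r for w_l, r in zip(w, reps))    # mix all layers into one memory vector
    memory.append(m_t)
    return h, memory

# Toy usage: decode a short sequence step by step (sequential, like an RNN)
rng = np.random.default_rng(0)
d, n_layers = 8, 3
memory, layer_mix = [], np.zeros(n_layers + 1)
for token_emb in rng.standard_normal((5, d)):
    out, memory = feedback_step(token_emb, memory, n_layers, layer_mix)
```

The trade-off discussed in the video is visible in the sketch: because each step's memory depends on all layers of that step, decoding is strictly sequential over timesteps, like an RNN, rather than parallel over the whole sequence as in a standard Transformer.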

Syllabus

- Intro & Overview
- Problems of Autoregressive Processing
- Information Flow in Recurrent Neural Networks
- Information Flow in Transformers
- Solving Complex Computations with Neural Networks
- Causal Masking in Transformers
- Missing Higher Layer Information Flow
- Feedback Transformer Architecture
- Connection to Attention-RNNs
- Formal Definition
- Experimental Results
- Conclusion & Comments


Taught by

Yannic Kilcher

Related Courses

- Sequence Models (DeepLearning.AI via Coursera)
- Modern Natural Language Processing in Python (Udemy)
- Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3 (Stanford University via YouTube)
- Long Form Question Answering in Haystack (James Briggs via YouTube)
- Spotify's Podcast Search Explained (James Briggs via YouTube)