
Recurrent Neural Networks and Models of Computation - Edward Grefenstette, DeepMind

Offered By: Alan Turing Institute via YouTube

Tags

Machine Learning Courses
Logic Courses
Attention Mechanisms Courses

Course Description

Overview

Explore the intersection of recurrent neural networks and traditional models of computation in this insightful talk by Edward Grefenstette from DeepMind. Delve into the analysis of various recurrent architectures, comparing simpler models to finite-state automata and examining how memory-augmented structures improve algorithmic efficiency. Investigate sequence translation, learning to execute, language modeling, and experiments in the computational hierarchy. Discover the role of attention mechanisms, read-only memory, and architectural bias in neural networks. Gain a deeper understanding of the relationship between logic and learning in complex systems, and how combining these approaches can lead to powerful solutions in artificial intelligence and formal reasoning.
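The talk's central contrast, finite-state-like recurrent models versus memory-augmented ones, mirrors the classical automata hierarchy. As a hedged illustration (not code from the talk), the language a^n b^n is the textbook example: no finite-state machine can recognize it, because that requires unbounded counting, but a single stack suffices; this is the same intuition behind augmenting recurrent networks with stack-like external memory.

```python
def recognize_anbn(s: str) -> bool:
    """Pushdown-automaton-style recognizer for the language a^n b^n (n >= 1).

    A finite-state automaton cannot recognize this language (it has only
    finitely many states, so it cannot count arbitrarily many 'a's), but
    one stack is enough -- analogous to a stack-augmented recurrent model.
    """
    stack = []
    i = 0
    # Phase 1: push one symbol per leading 'a'.
    while i < len(s) and s[i] == "a":
        stack.append("a")
        i += 1
    # Phase 2: pop one symbol per following 'b'.
    while i < len(s) and s[i] == "b":
        if not stack:
            return False  # more 'b's than 'a's
        stack.pop()
        i += 1
    # Accept iff the input is exhausted, the stack is empty,
    # and at least one 'a' was seen.
    return i == len(s) and not stack and len(s) > 0
```

The counter could be a plain integer here; an explicit list-as-stack is used to keep the correspondence with a pushdown automaton visible.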

Syllabus

Introduction
Sequence Translation
Learning to Execute
Language Modeling
Experiments
Computational Hierarchy
Data Efficiency
Recurrent Networks
Attention
Attention to Sequence
Limitations
Read-only Memory
Turing Machine
Pushdown Automata
Architectural Bias
Conclusion


Taught by

Alan Turing Institute

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent