
Knowledge Is Embedded in Language Neural Networks but Can They Reason?

Offered By: Simons Institute via YouTube

Tags

Transformer Models, Deep Learning, Neural Networks, Self-Attention Mechanisms

Course Description

Overview

Explore the reasoning capabilities and limitations of language neural networks in this 50-minute lecture by Chris Manning of Stanford University. Delve into the evolution of language models, from n-gram models to modern neural language models such as GPT-2. Examine the structure and function of transformer models and self-attention mechanisms in sequence modeling. Analyze the strengths of current systems while acknowledging the basic natural language understanding errors they still make. Investigate what reasoning means in AI, including appropriate structural priors and compositional reasoning trees. Discover emerging research directions, such as Neural State Machines, and their potential to enhance AI reasoning abilities. Gain insight into the accuracy of Neural State Machines on visual question-answering benchmarks such as GQA.
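
For readers skimming the listing, the snippet below is a minimal NumPy sketch of the masked (causal) self-attention step the overview refers to. It is not taken from the lecture; all matrix names and sizes are illustrative assumptions.

import numpy as np

def masked_self_attention(X, Wq, Wk, Wv):
    # Project each token embedding into query, key, and value vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Scaled dot-product similarity between every pair of positions.
    scores = Q @ K.T / np.sqrt(d_k)
    # Causal mask: entries above the diagonal correspond to future tokens.
    mask = np.triu(np.ones_like(scores), k=1)
    scores = np.where(mask == 1, -1e9, scores)
    # Softmax over the allowed (non-future) positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted sum of value vectors.
    return weights @ V

# Toy usage: 4 tokens with 8-dimensional embeddings (sizes are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(masked_self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)

The upper-triangular mask is what makes this suitable for language modeling: each position can attend only to itself and earlier positions.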

Syllabus

Intro
Tree-structured models
Language Modeling
LMs in The Dark Ages: n-gram models
Enlightenment era neural language models (NLMs)
GPT-2 language model (cherry-picked) output
Transformer models
Classic word2vec (Mikolov et al. 2013)
Self-attention in (masked) sequence model
Good systems are great, but still basic NLU errors
What is Reasoning? Bottou 2011
Appropriate structural priors
Compositional reasoning tree
A 2020s Research Direction
A Neural State Machine
NSM accuracy on GQA


Taught by

Simons Institute

Related Courses

Transformer Models and BERT Model - Locales (Google via Google Cloud Skills Boost)
Transformer Models and BERT Model (Pluralsight)
Transformer Models and BERT Model - Français (Google Cloud via Coursera)
Transformer Models and BERT Model - Português Brasileiro (Google Cloud via Coursera)
Transformer Models and BERT Model (Google via Google Cloud Skills Boost)