Priors for Semantic Variables - Yoshua Bengio

Offered By: Institute for Advanced Study via YouTube

Tags

Theoretical Machine Learning, Causality, Attention Mechanisms, Parameterization

Course Description

Overview

Explore the frontiers of machine learning in this seminar on Theoretical Machine Learning, featuring renowned researcher Yoshua Bengio from Université de Montréal. Delve into the concept of priors for semantic variables and examine the limitations of current machine learning approaches. Gain insights into systematization, learning theory, and the role of conscious processing in AI. Investigate the differences between System 1 and System 2 thinking, and explore knowledge representation, attention mechanisms, and the Global Workspace Theory. Discover the importance of causality, the Independent Mechanism Hypothesis, and localized changes in AI systems. Learn about parameterization, multitask learning, and modular recurrent networks as Bengio shares cutting-edge ideas on advancing machine learning capabilities.

Syllabus

Introduction
Limitations of machine learning
Systematization
Learning theory
Conscious processing
Agency
System 1 vs System 2
The Kind of Knowledge
Knowledge Representation
Attention
Recurrent Independent Mechanisms
Global Workspace Theory
Attention Mechanisms
Causality
Independent Mechanism Hypothesis
Localized Changes
Parameterization
Multitask learning
Modular recurrent net


Taught by

Institute for Advanced Study

Related Courses

Deep Learning for Natural Language Processing
University of Oxford via Independent
Sequence Models
DeepLearning.AI via Coursera
Deep Learning Part 1 (IITM)
Indian Institute of Technology Madras via Swayam
Deep Learning - Part 1
Indian Institute of Technology, Ropar via Swayam
Deep Learning - IIT Ropar
Indian Institute of Technology, Ropar via Swayam