Priors for Semantic Variables - Yoshua Bengio

Offered By: Institute for Advanced Study via YouTube

Tags

Theoretical Machine Learning Courses
Causality Courses
Attention Mechanisms Courses
Parameterization Courses

Course Description

Overview

Explore the frontiers of machine learning in this Theoretical Machine Learning seminar featuring renowned researcher Yoshua Bengio of Université de Montréal. Delve into the concept of priors for semantic variables and examine the limitations of current machine learning approaches. Gain insights into systematization, learning theory, and the role of conscious processing in AI. Investigate the differences between System 1 and System 2 thinking, and explore knowledge representation, attention mechanisms, and Global Workspace Theory. Discover the importance of causality, the Independent Mechanism Hypothesis, and localized changes in AI systems. Learn about parameterization, multitask learning, and modular recurrent networks as Bengio shares cutting-edge ideas for advancing machine learning capabilities.
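Several of the topics above (attention mechanisms, Recurrent Independent Mechanisms, Global Workspace Theory) build on soft attention, where a query selects a weighted combination of value vectors. The following is a minimal illustrative sketch of scaled dot-product attention in NumPy; the function names, shapes, and random inputs are assumptions for demonstration, not material from the talk.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    and the scores (after softmax) weight a convex combination of values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_queries, n_keys) similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ values, weights

# Tiny example: 2 queries attending over 3 key/value slots of dimension 4.
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 4))
out, w = attention(q, k, v)   # out: (2, 4), w: (2, 3)
```

In architectures like Recurrent Independent Mechanisms, a similar attention step is what lets a small set of modules compete for access to the input and to each other, rather than attending over a sequence of tokens.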

Syllabus

Introduction
Limitations of machine learning
Systematization
Learning theory
Conscious processing
Agency
System 1 vs System 2
The Kind of Knowledge
Knowledge Representation
Attention
Recurrent Independent Mechanisms
Global Workspace Theory
Attention Mechanisms
Causality
Independent Mechanism Hypothesis
Localized Changes
Parameterization
Multitask learning
Modular recurrent net


Taught by

Institute for Advanced Study

Related Courses

Epidemiology: The Basic Science of Public Health
The University of North Carolina at Chapel Hill via Coursera
Algorithmic Information Dynamics: From Networks to Cells
Santa Fe Institute via Complexity Explorer
Environmental Challenges: Human Impact in the Natural Environment
University of Leeds via FutureLearn
Data Analytics for Lean Six Sigma
University of Amsterdam via Coursera
Data Science: Inferential Thinking through Simulations
University of California, Berkeley via edX