Limitations of Large Language Models

Offered By: GERAD Research Center via YouTube

Tags

Artificial Intelligence Courses
Machine Learning Courses
Computer Vision Courses
Catastrophic Forgetting Courses

Course Description

Overview

Explore the limitations of large language models (LLMs) in this DS4DM Coffee Talk presented by Sarath Chandar of Polytechnique Montréal, Canada. The talk examines the effects of using LLMs as task solvers: what kinds of knowledge they can encode and how efficiently they use that knowledge for downstream tasks. It also investigates the susceptibility of LLMs to catastrophic forgetting when learning multiple tasks, and covers methods for identifying and removing biases encoded within these models. The presentation surveys several research projects addressing these questions, highlighting the current limitations of LLMs and offering insights into building more capable systems in the future.

Syllabus

Limitations of Large Language Models, Sarath Chandar


Taught by

GERAD Research Center

Related Courses

Active Dendrites Avoid Catastrophic Forgetting - Interview With the Authors
Yannic Kilcher via YouTube
Avoiding Catastrophe - Active Dendrites Enable Multi-Task Learning in Dynamic Environments
Yannic Kilcher via YouTube
Supermasks in Superposition - Paper Explained
Yannic Kilcher via YouTube
What Kind of AI Can Help Manufacturing Adapt to a Pandemic
Open Data Science via YouTube
Rethinking Architecture Design for Data Heterogeneity in FL - Liangqiong Qu
Stanford University via YouTube