Limitations of Large Language Models
Offered By: GERAD Research Center via YouTube
Course Description
Overview
Explore the limitations of large language models (LLMs) in this insightful DS4DM Coffee Talk presented by Sarath Chandar from Polytechnique Montréal, Canada. Delve into the effects of using LLMs as task solvers, examining the types of knowledge they can encode and their efficiency in utilizing this knowledge for downstream tasks. Investigate the susceptibility of LLMs to catastrophic forgetting when learning multiple tasks, and learn about methods for identifying and eliminating biases encoded within these models. Gain a comprehensive overview of various research projects addressing these critical questions, shedding light on the current limitations of LLMs and providing insights into building more intelligent systems for the future.
Syllabus
Limitations of Large Language Models, Sarath Chandar
Taught by
GERAD Research Center
Related Courses
Introduction to Artificial Intelligence — Stanford University via Udacity
Natural Language Processing — Columbia University via Coursera
Probabilistic Graphical Models 1: Representation — Stanford University via Coursera
Computer Vision: The Fundamentals — University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) — California Institute of Technology via Independent