Machine Reading, Fast and Slow - When Do Models Understand Language?
Offered By: Santa Fe Institute via YouTube
Course Description
Overview
Explore a thought-provoking lecture on the complexities of machine reading comprehension and the challenges in developing AI models that truly "understand" language. Delve into behavioral benchmarks, reading comprehension assessments, and the limitations of current transformer models. Examine why scale alone doesn't solve comprehension issues and how pre-training knowledge is underutilized. Investigate the lack of shortcuts in language processing, data-related problems, and testing methodologies. Conclude by considering open problems in the field of machine reading and language understanding.
Syllabus
Intro
Behavioral Benchmarks
Reading Comprehension
How is it going
Scale alone doesn't solve it
Transformers don't use pre-training knowledge
Lack of shortcuts
Data working problem
Testing
Open Problems
Taught by
Santa Fe Institute
Related Courses
Sequence Models - DeepLearning.AI via Coursera
Modern Natural Language Processing in Python - Udemy
Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3 - Stanford University via YouTube
Long Form Question Answering in Haystack - James Briggs via YouTube
Spotify's Podcast Search Explained - James Briggs via YouTube