Extracting Information from Text into Memory for Knowledge-Intensive Tasks
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore the challenges and potential solutions for knowledge-intensive natural language processing tasks in this 51-minute lecture by Fei Sha from the Center for Language & Speech Processing at Johns Hopkins University. Delve into the limitations of current large language models in storing and accessing factual information reliably. Discover alternative architectures that incorporate dedicated memory components to improve performance on tasks requiring reasoning over linguistic expressions. Learn about specific approaches like dedicated memory controllers, pretraining paths, and multi-read techniques. Examine case studies in narrative question answering and other applications to see how these methods perform in practice. Gain insights into the future directions of NLP research aimed at enhancing models' ability to extract, represent, and utilize knowledge from text.
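As a rough illustration of the memory-based, multi-read ideas mentioned above (not the lecture's actual models), the following Python sketch stores entity mentions in an external memory on a first read and answers a question by retrieving from that memory on a second read; the function names, the crude title-case entity detection, and the word-overlap scoring are all hypothetical simplifications.

from collections import defaultdict

def first_read(passage: str) -> dict:
    """First pass: extract (entity, sentence) pairs into a simple external memory."""
    memory = defaultdict(list)
    for sentence in passage.split("."):
        for token in sentence.split():
            if token.istitle():  # crude stand-in for entity detection
                memory[token].append(sentence.strip())
    return memory

def second_read(question: str, memory: dict) -> str:
    """Second pass: return the stored sentence that best overlaps the question."""
    q_tokens = set(question.lower().split())
    best, best_score = "", 0
    for sentences in memory.values():
        for sentence in sentences:
            score = len(q_tokens & set(sentence.lower().split()))
            if score > best_score:
                best, best_score = sentence, score
    return best

passage = "Alice moved to Paris. Bob stayed in Boston. Alice studies physics."
memory = first_read(passage)
print(second_read("Where did Alice move?", memory))  # prints "Alice moved to Paris"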
Syllabus
Introduction
Large language models as knowledge sources
Problems with large language models
Dedicated memory controller
Modern models
Pretraining path
Masks
Entry Point
Motivation Example
Intuition
Variants
How to get auxiliary information
Read twice
First read
Representation of multiword phrases
Narrative QA
Results
Action Apps
Memory
Motivation
Question Answering
Modeling Challenge
Evaluation
Memory Size
First Example
Retrieval
Questions
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Introduction to Logic - Stanford University via Coursera
Think Again: How to Reason and Argue - Duke University via Coursera
Public Speaking - University of Washington via edX
Artificial Intelligence: Knowledge Representation And Reasoning - Indian Institute of Technology Madras via Swayam
AP® Psychology - Course 3: How the Mind Works - The University of British Columbia via edX