Understanding and Improving Efficient Language Models
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the computational challenges and advancements in efficient language models through this insightful talk by Simran Arora from Stanford University. Delve into the bottlenecks of machine learning, particularly in modeling text, code, and DNA, and understand the limitations of the Transformer architecture. Discover the concept of associative recall (AR) and its significant impact on language modeling quality. Learn about the research findings that explain the tradeoffs between Transformers and efficient language models. Gain insights into new hardware-efficient ML architectures, such as BASED and JRT, which push the boundaries of quality-efficiency tradeoffs in language modeling. Examine how more resource-efficient approaches could unlock the full potential of machine learning across various domains.
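The associative recall (AR) task mentioned above can be made concrete with a small synthetic example. The sketch below is our own illustration, not code from the talk: the model sees a sequence of key-value pairs followed by a query key, and must produce the value that was paired with that key earlier in the context. All names (`make_ar_example`, the toy vocabulary) are hypothetical.

```python
# Minimal sketch (our illustration, not from the talk) of an
# associative recall (AR) task instance: interleaved key-value
# pairs followed by a query key; the target is the paired value.
import random

def make_ar_example(num_pairs=4, vocab=("a", "b", "c", "d", "e", "f", "g", "h"), seed=0):
    rng = random.Random(seed)
    keys = rng.sample(vocab, num_pairs)    # distinct keys
    values = rng.sample(vocab, num_pairs)  # values (may overlap with keys)
    query = rng.choice(keys)
    # Context is k1 v1 k2 v2 ... followed by the query key.
    context = [tok for kv in zip(keys, values) for tok in kv]
    target = values[keys.index(query)]
    return context + [query], target

sequence, answer = make_ar_example()
# A model solves AR if, given `sequence`, it outputs `answer`:
# the value that followed the final (query) token earlier in context.
```

Recurrent models with a fixed-size state must compress the whole context into that state before seeing the query, which is why AR is a useful lens on the quality-efficiency tradeoffs the talk discusses.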
Syllabus
Understanding and Improving Efficient Language Models
Taught by
Simons Institute
Related Courses
Linear Circuits - Georgia Institute of Technology via Coursera
Introduction to Power and Energy Engineering (مقدمة في هندسة الطاقة والقوى) - King Abdulaziz University via Rwaq (رواق)
Magnetic Materials and Devices - Massachusetts Institute of Technology via edX
Linear Circuits 2: AC Analysis - Georgia Institute of Technology via Coursera
Transmisión de energía eléctrica (Electric Power Transmission) - Tecnológico de Monterrey via edX