
Understanding and Improving Efficient Language Models

Offered By: Simons Institute via YouTube

Tags

Language Models Courses
Machine Learning Courses
Computational Models Courses
Transformers Courses

Course Description

Overview

Explore the computational challenges and advances in efficient language models in this talk by Simran Arora of Stanford University. Delve into the computational bottlenecks of machine learning, particularly in modeling text, code, and DNA, and understand the limitations of the Transformer architecture. Discover the concept of associative recall (AR) and its significant impact on language modeling quality. Learn how recent research findings explain the quality-efficiency tradeoffs between Transformers and efficient language models. Gain insights into new hardware-efficient ML architectures, such as BASED and JRT, which push the boundary of this tradeoff in language modeling. Examine how more resource-efficient approaches could unlock the full potential of machine learning across a wider range of domains.
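
To make the associative recall (AR) idea concrete, here is a minimal, hypothetical sketch of a synthetic AR probe in Python: a prompt lists key-value pairs and then queries one key, and the task is to return the paired value. The task format, the function names, and the trivial lookup baseline are illustrative assumptions, not material from the talk itself.

import random

def make_ar_example(num_pairs=4, keys_vocab=tuple("abcdefgh"), values_vocab=tuple("12345678")):
    """Build one AR prompt like 'a 3 b 7 c 5 d 2 b' whose answer is '7'."""
    keys = random.sample(keys_vocab, num_pairs)
    vals = random.sample(values_vocab, num_pairs)
    pairs = list(zip(keys, vals))
    query_key, answer = random.choice(pairs)
    prompt = " ".join(f"{k} {v}" for k, v in pairs) + f" {query_key}"
    return prompt, answer

def exact_recall_baseline(prompt):
    """A toy 'model' with perfect recall: look the queried key back up in the prompt."""
    tokens = prompt.split()
    query_key = tokens[-1]                       # last token is the query
    mapping = dict(zip(tokens[:-1:2], tokens[1:-1:2]))  # alternating key/value tokens
    return mapping[query_key]

if __name__ == "__main__":
    trials = 1000
    correct = sum(
        exact_recall_baseline(p) == a
        for p, a in (make_ar_example() for _ in range(trials))
    )
    print(f"AR accuracy of the lookup baseline: {correct / trials:.3f}")  # 1.000

Prompts of this shape are one way to probe recall: models that can attend back to arbitrary earlier tokens handle them easily, while architectures that compress context into a fixed-size state tend to lose accuracy as the number of pairs grows, which is the quality-efficiency tension the talk examines.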

Syllabus

Understanding and Improving Efficient Language Models


Taught by

Simons Institute

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent