Understanding and Improving Efficient Language Models

Offered By: Simons Institute via YouTube

Tags

Language Models Courses, Machine Learning Courses, Computational Models Courses, Transformers Courses

Course Description

Overview

Explore the computational challenges and advances in efficient language models in this talk by Simran Arora of Stanford University. Delve into the bottlenecks of applying machine learning to text, code, and DNA, and understand the limitations of the Transformer architecture. Discover the concept of associative recall (AR) and its significant impact on language modeling quality. Learn about research findings that explain the quality-efficiency tradeoffs between Transformers and efficient language models. Gain insights into new hardware-efficient architectures, such as BASED and JRT, which push the quality-efficiency frontier in language modeling. Examine how more resource-efficient approaches could unlock the full potential of machine learning across domains.
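
To make the associative recall (AR) idea mentioned above concrete, the sketch below builds one toy key-value recall prompt: key-value pairs appear in the context and the model is expected to emit the value bound to a queried key. This is a rough illustration, not material from the talk itself; the function and parameter names (make_ar_example, num_pairs, vocab_size) are hypothetical.

import random

def make_ar_example(num_pairs=4, vocab_size=32, seed=0):
    """Build one toy associative-recall prompt: key-value pairs followed by
    a query key; the target is the value bound to that key."""
    rng = random.Random(seed)
    keys = rng.sample(range(vocab_size), num_pairs)
    # Values come from a disjoint token range so keys and values never collide.
    values = [rng.randrange(vocab_size, 2 * vocab_size) for _ in keys]

    # Context: interleaved key/value tokens, e.g. k1 v1 k2 v2 ...
    context = [tok for pair in zip(keys, values) for tok in pair]

    # Query one of the keys seen earlier; a model with good recall emits its value.
    query_idx = rng.randrange(num_pairs)
    query_key, target_value = keys[query_idx], values[query_idx]
    return context + [query_key], target_value

if __name__ == "__main__":
    prompt, target = make_ar_example()
    print("prompt tokens:", prompt)
    print("expected next token:", target)

Because the queried binding can sit anywhere in the context, solving this task requires the model to look back at earlier tokens, which is why AR performance is a useful probe of the tradeoff between Transformers and more efficient architectures.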

Syllabus

Understanding and Improving Efficient Language Models


Taught by

Simons Institute

Related Courses

Microsoft Bot Framework and Conversation as a Platform
Microsoft via edX
Unlocking the Power of OpenAI for Startups - Microsoft for Startups
Microsoft via YouTube
Improving Customer Experiences with Speech to Text and Text to Speech
Microsoft via YouTube
Stanford Seminar - Deep Learning in Speech Recognition
Stanford University via YouTube
Select Topics in Python: Natural Language Processing
Codio via Coursera