Transformer and Large Language Models - Part I - Lecture 12

Offered By: MIT HAN Lab via YouTube

Tags

Transformers Courses · Machine Learning Courses · Attention Mechanisms Courses · Self-Attention Courses · Positional Encoding Courses

Course Description

Overview

Dive into the world of Transformers and Large Language Models (LLMs) with this comprehensive lecture from MIT's 6.5940 course. Explore the fundamental concepts, architectures, and applications of these groundbreaking technologies in natural language processing and beyond. Learn from Professor Song Han as he delves into the intricacies of attention mechanisms, self-attention, and the overall structure of Transformer models. Gain insights into the scaling laws of LLMs and understand their impact on various domains. Discover the challenges and opportunities in developing efficient and powerful language models. Access accompanying slides for visual aids and additional resources to enhance your understanding of this cutting-edge field in artificial intelligence and machine learning.
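To make the lecture's central idea concrete, here is a minimal sketch of scaled dot-product self-attention, the mechanism the description refers to. This is an illustrative NumPy implementation, not code from the course; the dimensions and weight matrices are arbitrary assumptions chosen for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq, seq) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key dimension
    return weights @ V, weights

# Self-attention: Q, K, and V are all linear projections of the SAME sequence.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                    # 4 tokens, embedding dim 8 (arbitrary)
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
```

Each row of `w` is a probability distribution over the input tokens, so every token's output is a weighted mix of all value vectors; the full Transformer block covered in the lecture adds multiple heads, positional encoding, and feed-forward layers around this core.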

Syllabus

EfficientML.ai Lecture 12 - Transformer and LLM (Part I) (MIT 6.5940, Fall 2023)


Taught by

MIT HAN Lab

Related Courses

Deep Learning for Natural Language Processing
University of Oxford via Independent
Sequence Models
DeepLearning.AI via Coursera
Deep Learning Part 1 (IITM)
Indian Institute of Technology Madras via Swayam
Deep Learning - Part 1
Indian Institute of Technology, Ropar via Swayam
Deep Learning - IIT Ropar
Indian Institute of Technology, Ropar via Swayam