YoVDO

Transformer and Large Language Models - Part I - Lecture 12

Offered By: MIT HAN Lab via YouTube

Tags

Transformers Courses
Machine Learning Courses
Attention Mechanisms Courses
Self-Attention Courses
Positional Encoding Courses

Course Description

Overview

Dive into the world of Transformers and Large Language Models (LLMs) with this lecture from MIT's 6.5940 course. Explore the fundamental concepts, architectures, and applications of these technologies in natural language processing and beyond. Professor Song Han covers attention mechanisms, self-attention, and the overall structure of Transformer models, along with the scaling laws of LLMs and their impact across domains. The lecture also discusses the challenges and opportunities in building efficient, powerful language models. Accompanying slides are available as visual aids and additional resources.
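As a taste of the self-attention mechanism the lecture covers, here is a minimal sketch of scaled dot-product self-attention in NumPy. The matrix shapes and random weights are illustrative assumptions, not taken from the lecture itself:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    # Project inputs into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Pairwise token similarities, scaled to stabilize gradients.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension (numerically stable form).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is an attention-weighted mix of the values.
    return weights @ V

# Illustrative example: 4 tokens, model dimension 8 (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one output vector per input token
```

The output keeps the input sequence shape: every token's new representation is a weighted combination of all tokens' value vectors, which is what lets Transformers model long-range dependencies in one step.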

Syllabus

EfficientML.ai Lecture 12 - Transformer and LLM (Part I) (MIT 6.5940, Fall 2023)


Taught by

MIT HAN Lab

Related Courses

Models and Platforms for Generative AI
IBM via edX
Natural Language Processing with Attention Models
DeepLearning.AI via Coursera
Circuitos con SPICE: Sistemas trifásicos y análisis avanzado
Pontificia Universidad Católica de Chile via Coursera
Linear Circuits
Georgia Institute of Technology via Coursera
Intro to AI Transformers
Codecademy