YoVDO

LLMs and Transformers Demystified: Introduction to AI Engineering - Lecture 1

Offered By: Data Centric via YouTube

Tags

Transformers Courses Artificial Intelligence Courses Machine Learning Courses Deep Learning Courses Neural Networks Courses Attention Mechanisms Courses Encoder-Decoder Architecture Courses Self-Attention Courses

Course Description

Overview

Dive into the world of artificial intelligence with this comprehensive lecture on Large Language Models (LLMs) and Transformers. Explore the groundbreaking 'Attention Is All You Need' research paper and its impact on AI development. Learn about the transformer architecture, a key component of many LLMs powering applications such as ChatGPT. Gain essential context for working with LLMs through an intuitive weather-forecasting analogy that makes complex concepts accessible. Follow along as the lecture covers the history of transformers, the structure of "Transformer City," detailed explanations of the encoder and decoder towers, and an analogy for model training. Perfect for AI engineers, researchers, and enthusiasts looking to demystify transformers and deepen their understanding of cutting-edge AI technologies.
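For readers who want a concrete preview before watching, the core operation introduced in 'Attention Is All You Need' is scaled dot-product attention. The sketch below is illustrative only (it is not from the lecture) and uses NumPy with made-up toy data; the function name and shapes are assumptions for demonstration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, as described in 'Attention Is All You Need'.

    Q, K: (seq_len, d_k) query and key matrices; V: (seq_len, d_v) values.
    Returns a (seq_len, d_v) matrix of attention-weighted values.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Weighted sum of the value vectors
    return weights @ V

# Toy example: 3 tokens, embedding size 4 (hypothetical data)
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Stacking this operation with feed-forward layers (in the lecture's analogy, the floors of the encoder and decoder towers) is what gives the transformer its structure.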

Syllabus

Introduction – Intro to Transformers & LLMs
Attention Is All You Need – History of Transformers
Welcome to Transformer City
Geographic Map of Transformer City
Encoder Tower in Detail
Encoder & Decoder Towers Working Together
Decoder Tower in Detail
Training Our Scientists to Forecast – Model Training Analogy
Outro


Taught by

Data Centric

Related Courses

Transformers: Text Classification for NLP Using BERT
LinkedIn Learning
TensorFlow: Working with NLP
LinkedIn Learning
TransGAN - Two Transformers Can Make One Strong GAN - Machine Learning Research Paper Explained
Yannic Kilcher via YouTube
Nyströmformer – A Nyström-Based Algorithm for Approximating Self-Attention
Yannic Kilcher via YouTube
Recreate Google Translate - Model Training
Edan Meyer via YouTube