
Demystifying Mixtral of Experts - Stanford CS25 Lecture

Offered By: Stanford University via YouTube

Tags

Language Models Courses, Artificial Intelligence Courses, Machine Learning Courses, Deep Learning Courses, Neural Networks Courses

Course Description

Overview

Explore the intricacies of Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model, in this lecture from Stanford's CS25 course. Delve into the architecture of Mixtral, which builds on the Mistral 7B framework but replaces each layer's single feedforward block with 8 feedforward blocks (experts). Discover how a router network selects two experts per token at each layer and combines their outputs, giving the model access to 47B total parameters while activating only 13B per token during inference. Gain insights into the routing decisions the model learns and their practical implications. Presented by Albert Jiang, an AI scientist at Mistral AI and PhD student at the University of Cambridge, the talk offers a deep dive into a cutting-edge language model architecture and its applications in pretraining and reasoning.
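To make the routing idea concrete, here is a minimal sketch of a top-2 sparse MoE layer in PyTorch: a router scores 8 expert feedforward blocks per token, keeps the two highest-scoring experts, and combines their outputs with softmax-renormalized gate weights. All dimensions, class names, and the simplified expert block are illustrative assumptions for this sketch, not Mistral's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeedForward(nn.Module):
    """One expert: a simplified feedforward block (illustrative, not Mistral's)."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x):
        return self.w2(F.silu(self.w1(x)))

class SparseMoELayer(nn.Module):
    def __init__(self, dim=512, hidden_dim=2048, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            FeedForward(dim, hidden_dim) for _ in range(num_experts)
        )
        self.router = nn.Linear(dim, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x):               # x: (num_tokens, dim)
        logits = self.router(x)         # (num_tokens, num_experts)
        # Keep only the top-k experts per token; renormalize their scores.
        weights, indices = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Run each expert only on the tokens routed to it, then sum the
        # gate-weighted expert outputs back into the result tensor.
        for e, expert in enumerate(self.experts):
            token_idx, slot = (indices == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue
            out[token_idx] += weights[token_idx, slot, None] * expert(x[token_idx])
        return out

tokens = torch.randn(10, 512)           # 10 tokens, model dim 512
print(SparseMoELayer()(tokens).shape)   # torch.Size([10, 512])

Because only 2 of the 8 experts run per token, each forward pass touches roughly a quarter of the expert parameters, which is the source of the 47B-total vs. 13B-active gap described above.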

Syllabus

Stanford CS25: V4 I Demystifying Mixtral of Experts


Taught by

Stanford Online


Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Machine Learning Techniques (機器學習技法)
National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning
University of Washington via Coursera
Applied Problems of Data Analysis (Прикладные задачи анализа данных)
Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning
Microsoft via edX