Decoding Mistral AI's Large Language Models - Building Blocks and Training Strategies
Offered By: Databricks via YouTube
Course Description
Overview
Explore the building blocks and training strategies powering Mistral AI's large language models in this 36-minute session presented by Devendra Singh Chaplot, Research Scientist at Mistral AI. Delve into the open-source models Mixtral 8x7B and Mixtral 8x22B, which use a mixture-of-experts (MoE) architecture and are released under the Apache 2.0 license. Gain insights into leveraging Mistral's "La Plateforme" API endpoints and get a sneak peek at upcoming features. Learn about the latest advancements in language model technology and their practical applications.
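To make the session's core idea concrete, below is a minimal sketch of top-2 mixture-of-experts routing in the style of a Mixtral block. The 8-expert, top-2 configuration follows the published Mixtral design; the dimensions are illustrative, and the experts are simplified two-layer MLPs rather than Mixtral's gated SwiGLU feed-forward networks.

```python
# Minimal sketch of Mixtral-style top-2 MoE routing (illustrative sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, dim=512, hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        logits = self.gate(x)                           # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # each token picks k experts
        weights = F.softmax(weights, dim=-1)            # normalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens whose k-th choice is e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

tokens = torch.randn(4, 512)
print(MoELayer()(tokens).shape)  # torch.Size([4, 512])
```

And for the La Plateforme endpoints mentioned above, here is a hedged usage sketch against Mistral's OpenAI-compatible chat-completions endpoint. The URL and model name reflect the public API at the time of writing; check the current Mistral documentation before relying on them.

```python
# Sketch of querying a Mixtral model on La Plateforme.
# Assumes the MISTRAL_API_KEY environment variable is set.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mixtral-8x7b",
        "messages": [{"role": "user", "content": "What is a mixture of experts?"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```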
Syllabus
Decoding Mistral AI's Large Language Models
Taught by
Databricks
Related Courses
GShard - Scaling Giant Models with Conditional Computation and Automatic Sharding
Yannic Kilcher via YouTube
Learning Mixtures of Linear Regressions in Subexponential Time via Fourier Moments
Association for Computing Machinery (ACM) via YouTube
Modules and Architectures
Alfredo Canziani via YouTube
Stanford Seminar - Mixture of Experts Paradigm and the Switch Transformer
Stanford University via YouTube
Pioneering a Hybrid SSM Transformer Architecture - Jamba Foundation Model
Databricks via YouTube