
Pioneering a Hybrid SSM Transformer Architecture - Jamba Foundation Model

Offered By: Databricks via YouTube

Tags

Transformers Courses, Machine Learning Courses, Neural Networks Courses, Mixture-of-Experts Courses

Course Description

Overview

Explore a conference talk on the development of Jamba, a foundation model built on a hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Delve into the decision-making process behind this hybrid design, and gain insight into its layered composition of SSM, Transformer, and MoE components. Learn how the flexible architecture enables resource- and objective-specific configurations, delivering high throughput and a 256K-token context window, the largest in its size class, with 140K tokens fitting on a single GPU. Discover how Jamba marks a shift in large language model development in this session presented by AI21 Labs CTO Barak Lenz. Access additional resources on LLMs and MLOps, and connect with Databricks on social media to explore more cutting-edge AI technologies.
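
The talk describes Jamba as a stack that interleaves SSM (Mamba) layers, Transformer attention layers, and MoE feed-forward layers. The sketch below is a minimal, hypothetical PyTorch illustration of that interleaving idea only: the SimpleSSMMixer stand-in, the layer ratios (attn_every, moe_every), and all dimensions are assumptions for demonstration and do not reflect AI21's published Jamba configuration or its Mamba kernels.

```python
# Minimal hybrid-stack sketch (NOT AI21's implementation).
# Assumptions: a gated depthwise convolution stands in for the Mamba/SSM mixer,
# attention layers appear every `attn_every` blocks, and a top-1 routed MoE MLP
# replaces the dense MLP every `moe_every` blocks. All values are illustrative.
import torch
import torch.nn as nn

class SimpleSSMMixer(nn.Module):
    """Stand-in for an SSM token mixer: a causal depthwise conv with a gate.
    A real Mamba layer uses a selective state-space recurrence instead."""
    def __init__(self, d_model: int, kernel_size: int = 4):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size - 1, groups=d_model)
        self.gate = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        h = self.conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return self.out(h * torch.sigmoid(self.gate(x)))

class AttentionMixer(nn.Module):
    """Multi-head self-attention token mixer (causal masking omitted for brevity)."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return out

class MoEFeedForward(nn.Module):
    """Top-1 routed mixture-of-experts MLP (simplified, no load balancing)."""
    def __init__(self, d_model: int, n_experts: int = 4, d_ff: int = 256):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):
        weights = self.router(x).softmax(dim=-1)      # (batch, seq, n_experts)
        top_w, top_idx = weights.max(dim=-1)          # one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = expert(x[mask]) * top_w[mask].unsqueeze(-1)
        return out

class HybridBlock(nn.Module):
    """One residual block: a token mixer (attention or SSM) plus an MLP (dense or MoE)."""
    def __init__(self, d_model, mixer, mlp):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.mixer, self.mlp = mixer, mlp

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))
        return x + self.mlp(self.norm2(x))

def build_hybrid_stack(d_model=64, n_layers=8, attn_every=4, moe_every=2):
    """Mostly SSM layers with periodic attention; MoE replaces some dense MLPs."""
    layers = []
    for i in range(n_layers):
        mixer = AttentionMixer(d_model) if i % attn_every == attn_every - 1 \
            else SimpleSSMMixer(d_model)
        mlp = MoEFeedForward(d_model) if i % moe_every == 1 else nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
        layers.append(HybridBlock(d_model, mixer, mlp))
    return nn.Sequential(*layers)

if __name__ == "__main__":
    model = build_hybrid_stack()
    tokens = torch.randn(2, 16, 64)   # (batch, seq, d_model)
    print(model(tokens).shape)        # torch.Size([2, 16, 64])
```

Varying the hypothetical attn_every and moe_every knobs shows how the same stack could be reconfigured for different memory and throughput budgets, which is the "resource- and objective-specific configuration" idea the talk highlights.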

Syllabus

Pioneering a Hybrid SSM Transformer Architecture


Taught by

Databricks

Related Courses

Building Intelligent Systems with MoE, Multimodality, and Crowd of Agents
Data Science Festival via YouTube
Stanford Seminar - Mixture of Experts Paradigm and the Switch Transformer
Stanford University via YouTube
Decoding Mistral AI's Large Language Models - Building Blocks and Training Strategies
Databricks via YouTube
Developing and Serving RAG-Based LLM Applications in Production
Anyscale via YouTube
GenAI in Production - Challenges and Trends - Lecture 224
MLOps.community via YouTube