YoVDO

LIMoE - Learning Multiple Modalities with One Sparse Mixture-of-Experts Model

Offered By: Prodramp via YouTube

Tags

Image Processing Courses
Performance Evaluation Courses

Course Description

Overview

Explore a 17-minute video delving into LIMoE (Learning Multiple Modalities with One Sparse Mixture-of-Experts Model), a large-scale multimodal architecture that processes both images and text using sparsely activated experts. Gain insights into LIMoE's internal architecture, data processing techniques, and performance. Follow along as the video covers the research paper introduction, key topics, LIMoE internals, the training system, multimodal contrastive learning, model behavior, and performance analysis. Access additional resources, including GitHub repositories and research papers, to further your understanding of this AI model.
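To make the "sparsely activated experts" idea concrete before watching, here is a minimal sketch of top-1 mixture-of-experts routing in the style LIMoE uses. All names, sizes, and the toy experts below are illustrative assumptions, not taken from the paper: a gate scores each token, and only the single best-scoring expert runs on it.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_top1(token, gate_weights, experts):
    # gate_weights: one weight vector per expert; score = dot(token, w).
    scores = [sum(t * w for t, w in zip(token, wv)) for wv in gate_weights]
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    # Sparse activation: only the chosen expert is evaluated, and its
    # output is scaled by the gate probability, as in standard MoE layers.
    out = experts[best](token)
    return [probs[best] * o for o in out], best

# Two toy "experts" (element-wise transforms standing in for expert MLPs).
experts = [lambda t: [2 * x for x in t],
           lambda t: [x + 1 for x in t]]
gate_weights = [[1.0, 0.0], [0.0, 1.0]]

out, chosen = moe_top1([3.0, 0.5], gate_weights, experts)
# The gate prefers expert 0 for this token, so chosen == 0.
```

In a real LIMoE layer the same router handles image and text tokens, which is why expert-routing behavior (covered later in the video) is interesting to inspect.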

Syllabus

- Research Paper intro
- Topics Covered
- LIMoE Internals
- Training System
- Multimodal Contrastive Learning
- LIMoE Behavior Understanding
- LIMoE Performance
- Conclusion


Taught by

Prodramp

Related Courses

Observing and Analysing Performance in Sport
OpenLearning
Introduction aux réseaux mobiles
Institut Mines-Télécom via France Université Numérique
Claves para Gestionar Personas
IESE Business School via Coursera
الأجهزة الطبية في غرف العمليات والعناية المركزة
Rwaq (رواق)
Clinical Supervision with Confidence
University of East Anglia via FutureLearn