YoVDO

Unraveling Multimodality with Large Language Models

Offered By: Linux Foundation via YouTube

Tags

PyTorch Courses
Stable Diffusion Courses
LangChain Courses
LLaVA Courses

Course Description

Overview

Explore the transformative role of Large Language Models (LLMs) in multimodality in this 38-minute conference talk by Alex Coqueiro of AWS. Gain insight into the contextual foundations and significance of multimodality, covering the various data modalities and multimodal tasks. Discover cutting-edge multimodal systems, with a focus on Latent Diffusion Model (LDM) technologies built on PyTorch, LangChain, Stable Diffusion, and LLaVA. Examine practical examples that integrate multimodality techniques with Llama 2, Falcon, and SDXL, showcasing their impact on the evolving multimodal landscape.

Syllabus

Unraveling Multimodality with Large Language Models - Alex Coqueiro, AWS


Taught by

Linux Foundation

Related Courses

LLaVA: The New Open Access Multimodal AI Model
1littlecoder via YouTube
Autogen and Local LLMs Create Realistic Stable Diffusion Model Autonomously
kasukanra via YouTube
Image Annotation with LLaVA and Ollama
Sam Witteveen via YouTube
Efficient and Portable AI/LLM Inference on the Edge Cloud - Workshop
Linux Foundation via YouTube
Training and Serving Custom Multi-modal Models - IDEFICS 2 and LLaVA Llama 3
Trelis Research via YouTube