Unraveling Multimodality with Large Language Models
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the transformative role of Large Language Models (LLMs) in multimodality through this 38-minute conference talk by Alex Coqueiro from AWS. Gain insights into the contextual foundations and significance of multimodality, covering various data modalities and multimodal tasks. Discover cutting-edge multimodal systems, with a focus on Latent Diffusion Model (LDM) technologies using PyTorch, LangChain, Stable Diffusion, and LLaVA. Examine practical examples demonstrating the integration of multimodality techniques with Llama 2, Falcon, and SDXL, showcasing their impact on shaping the multimodal landscape.
Syllabus
Unraveling Multimodality with Large Language Models - Alex Coqueiro, AWS
Taught by
Linux Foundation
Related Courses
The New AI Model Licenses Have a Legal Loophole - OpenRAIL-M of BLOOM, Stable Diffusion, etc. (Yannic Kilcher via YouTube)
Stable Diffusion - Master AI Art: Installation, Prompts, Txt2img-Img2img, Out-Inpaint and Resize Tutorial (ChamferZone via YouTube)
Get Started With Stable Diffusion - Code, HF Spaces, Diffusers Notebooks (Aleksa Gordić - The AI Epiphany via YouTube)
Stable Diffusion Animation Tutorial - Deforum All Settings Explained - Make Your Own AI Video (Sebastian Kamph via YouTube)
Stable Diffusion - What, Why, How? (Edan Meyer via YouTube)