YoVDO

Llama 3.2 - Multimodal and Edge Computing Advancements

Offered By: Sam Witteveen via YouTube

Tags

LLaMA (Large Language Model Meta AI) Courses
Computer Vision Courses
Edge Computing Courses
Distillation Courses
Multimodal AI Courses
Model Compression Courses
Hugging Face Courses
Ollama Courses

Course Description

Overview

Explore the latest developments in Llama 3.2, including its multimodal capabilities and edge computing applications, in this 13-minute video. Dive into the details of Meta's blog post announcing Llama 3.2, examining vision instruction-tuned benchmarks and lightweight instruction-tuned models. Understand the pruning and distillation process through a detailed diagram. Watch a demonstration of Llama 3.2 in action and learn about the Llama Stack APIs. Discover where to find Llama 3.2 models on Hugging Face and how to use them with Ollama. For those interested in building LLM Agents, a form link is provided for further exploration. Access additional resources, including GitHub repositories for langchain and LLM tutorials.
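As context for the Ollama segment of the video, a minimal sketch of trying Llama 3.2 locally might look like the following. This assumes Ollama is already installed and that the `llama3.2` and `llama3.2-vision` model tags are available in the Ollama library; model names and prompts here are illustrative only.

```shell
# Download the lightweight 3B text model (assumes Ollama is installed).
ollama pull llama3.2

# Run a one-off prompt against it.
ollama run llama3.2 "Summarize the Llama 3.2 release in one sentence."

# The multimodal vision model is published under a separate tag.
ollama pull llama3.2-vision
```

The same models can also be downloaded directly from their Hugging Face repositories, as shown later in the video.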

Syllabus

Intro
Llama 3.2
Llama 3.2 Blog
Llama 3.2 Vision Instruction-tuned Benchmarks
Llama 3.2 Lightweight Instruction-tuned Benchmarks
Llama 3.2 Pruning and Distillation Diagram
Llama 3.2 Demo
Llama Stack APIs
Llama 3.2 Models on Hugging Face
Llama 3.2 on Ollama
Outro


Taught by

Sam Witteveen

Related Courses

Generative AI, from GANs to CLIP, with Python and Pytorch
Udemy
ODSC East 2022 Keynote by Luis Vargas, Ph.D. - The Big Wave of AI at Scale
Open Data Science via YouTube
Comparing AI Image Caption Models: GIT, BLIP, and ViT+GPT2
1littlecoder via YouTube
In Conversation with the Godfather of AI
Collision Conference via YouTube
LLaVA: The New Open Access Multimodal AI Model
1littlecoder via YouTube