Llama 3.2 - Multimodal and Edge Computing Advancements
Offered By: Sam Witteveen via YouTube
Course Description
Overview
Explore the latest developments in Llama 3.2, including its multimodal capabilities and edge computing applications, in this 13-minute video. Dive into the details of Meta's blog post announcing Llama 3.2, examining vision instruction-tuned benchmarks and lightweight instruction-tuned models. Understand the pruning and distillation process through a detailed diagram. Watch a demonstration of Llama 3.2 in action and learn about the Llama Stack APIs. Discover where to find Llama 3.2 models on Hugging Face and how to use them with Ollama. For those interested in building LLM Agents, a form link is provided for further exploration. Access additional resources, including GitHub repositories for langchain and LLM tutorials.
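The Ollama workflow mentioned above can be sketched as follows. This is a minimal illustration, assuming Ollama is installed locally and that a `llama3.2` tag is available in its model library; it is not taken from the video itself:

```shell
# Pull the Llama 3.2 model from the Ollama model library
ollama pull llama3.2

# Run a one-off prompt against the local model
ollama run llama3.2 "Summarize the Llama 3.2 release in one sentence."
```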
Syllabus
Intro
Llama 3.2
Llama 3.2 Blog
Llama 3.2 Vision Instruction-tuned Benchmarks
Llama 3.2 Lightweight Instruction-tuned Benchmarks
Llama 3.2 Pruning and Distillation Diagram
Llama 3.2 Demo
Llama Stack APIs
Llama 3.2 Models on Hugging Face
Llama 3.2 on Ollama
Outro
Taught by
Sam Witteveen
Related Courses
LLaMA - Open and Efficient Foundation Language Models - Paper Explained
Yannic Kilcher via YouTube
Alpaca & LLaMA - Can it Compete with ChatGPT?
Venelin Valkov via YouTube
Experimenting with Alpaca & LLaMA
Aladdin Persson via YouTube
What's LLaMA? ChatLLaMA? - And Some ChatGPT/InstructGPT
Aladdin Persson via YouTube
Llama Index - Step by Step Introduction
echohive via YouTube