Running Gemma Using HuggingFace Transformers and Ollama
Offered By: Sam Witteveen via YouTube
Course Description
Overview
Explore how to run Gemma using Hugging Face Transformers or Ollama in this video tutorial. Learn about the different implementation options, including Ollama, Keras, and gemma.cpp. Follow along with code examples and step-by-step instructions for using Gemma on both the Hugging Face and Ollama platforms. Use the provided resources, such as Colab notebooks, GitHub repositories, and official documentation, to deepen your understanding and practical application of Gemma models.
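As a concrete illustration of the Hugging Face route the video covers, here is a minimal sketch (not code from the video itself). The `google/gemma-2b-it` checkpoint, the prompt, and the generation settings are assumptions; downloading the weights requires a Hugging Face account that has accepted Gemma's license terms.

```python
# Minimal sketch: running Gemma with Hugging Face Transformers.
# Assumptions: the "google/gemma-2b-it" instruction-tuned checkpoint
# and Gemma's standard chat turn markers; loading the weights needs
# Gemma license acceptance on the Hugging Face Hub.

def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's instruction-tuned turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

def generate(prompt: str, model_id: str = "google/gemma-2b-it") -> str:
    """Load the model and generate a completion for the given prompt."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Usage (downloads the weights; requires license acceptance):
#   print(generate(format_gemma_prompt("Why is the sky blue?")))
```

The imports sit inside `generate` so the prompt-formatting helper can be used without the `transformers` dependency installed.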
Syllabus
Intro
Gemma + Ollama
Gemma + Keras
gemma.cpp
Gemma using Hugging Face
Gemma using Ollama
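To make the Ollama segments of the syllabus concrete, the following is a minimal sketch (not taken from the video) that calls Ollama's local REST API. It assumes an Ollama server running on the default port 11434 and that `ollama pull gemma:2b` has already been run.

```python
# Minimal sketch: querying Gemma through a local Ollama server.
# Assumptions: `ollama serve` is running on the default port 11434
# and the `gemma:2b` model has been pulled beforehand.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "gemma:2b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_gemma(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server):
#   print(ask_gemma("Why is the sky blue?"))
```

Setting `"stream": False` makes Ollama return a single JSON object instead of a stream of chunks, which keeps the client code short.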
Taught by
Sam Witteveen
Related Courses
The GenAI Stack - From Zero to Database-Backed Support Bot (Docker via YouTube)
Ollama Crash Course: Running AI Models Locally Offline on CPU (1littlecoder via YouTube)
AI Anytime, Anywhere - Getting Started with LLMs on Your Laptop (Docker via YouTube)
Rust Ollama Tutorial - Interfacing with Ollama API Using ollama-rs (Jeremy Chone via YouTube)
Ollama: Libraries, Vision Models, and OpenAI Compatibility Updates (Sam Witteveen via YouTube)