YoVDO

Running Gemma Using HuggingFace Transformers and Ollama

Offered By: Sam Witteveen via YouTube

Tags

Machine Learning Courses, Keras Courses, Model Deployment Courses, Ollama Courses, Gemma Courses

Course Description

Overview

Explore how to run Gemma using Hugging Face Transformers or Ollama in this video tutorial. Learn about the different ways to run the model, including Ollama, Keras, and gemma.cpp. Follow along with code examples and step-by-step instructions for using Gemma on both the Hugging Face and Ollama platforms. Linked resources, including Colab notebooks, GitHub repositories, and the official documentation, support further exploration of the Gemma models.
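As background for the Ollama portion of the tutorial: once a model is pulled, Ollama serves it over a local REST API on port 11434. A minimal sketch of calling that API from Python with only the standard library, assuming `ollama serve` is running and `gemma:7b` has already been pulled (the model tag is an assumption; substitute whichever Gemma variant you pulled):

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="gemma:7b"):
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="gemma:7b"):
    # Requires a running Ollama server with the model already pulled
    # (e.g. `ollama pull gemma:7b`).
    body = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The completed text comes back in the "response" field.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is the sky blue?"))
```

The same endpoint backs the `ollama run` CLI shown in the video, so the two approaches are interchangeable.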

Syllabus

Intro
Gemma + Ollama
Gemma + Keras
gemma.cpp
Gemma using Hugging Face
Gemma using Ollama
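Whichever runtime the syllabus sections use, the instruction-tuned Gemma checkpoints expect their own chat template around the user's text. A minimal sketch of formatting a single-turn prompt by hand (when using Hugging Face Transformers, `tokenizer.apply_chat_template` produces this format for you):

```python
def format_gemma_prompt(user_message):
    """Wrap one user turn in Gemma's instruct control tokens.

    The trailing '<start_of_turn>model' line cues the model to
    begin its reply.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

if __name__ == "__main__":
    print(format_gemma_prompt("Why is the sky blue?"))
```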


Taught by

Sam Witteveen

Related Courses

The GenAI Stack - From Zero to Database-Backed Support Bot
Docker via YouTube
Ollama Crash Course: Running AI Models Locally Offline on CPU
1littlecoder via YouTube
AI Anytime, Anywhere - Getting Started with LLMs on Your Laptop
Docker via YouTube
Rust Ollama Tutorial - Interfacing with Ollama API Using ollama-rs
Jeremy Chone via YouTube
Ollama: Libraries, Vision Models, and OpenAI Compatibility Updates
Sam Witteveen via YouTube