YoVDO

Using Ollama to Run Local LLMs on the Steam Deck - Performance Comparison with Raspberry Pi 5

Offered By: Ian Wootten via YouTube

Tags

Ollama Courses
Linux Courses

Course Description

Overview

Explore the process of installing and running Ollama, a tool for running large language models locally, on the Steam Deck handheld. Learn how to set up Ollama, run several models, and compare the results against the Raspberry Pi 5. Gain insight into the Steam Deck's capabilities as a fully fledged PC beyond gaming. Follow along with the installation steps, watch the model-run demonstrations, and see what running AI models on portable gaming hardware looks like in practice.
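As a rough sketch of the workflow the video covers, the typical steps are Ollama's standard Linux install script followed by pulling and running a model. The exact commands and model used in the video are not reproduced here; note that SteamOS uses an immutable root filesystem, so the install script may need Desktop Mode and adjustments the video walks through.

```shell
# Install Ollama using its official Linux install script
# (on SteamOS this is typically done from Desktop Mode).
curl -fsSL https://ollama.com/install.sh | sh

# Download and run a model interactively; the model name here
# is an example, not necessarily the one used in the video.
ollama run llama2

# Re-run with --verbose to print timing stats (e.g. eval rate
# in tokens/second), useful for comparing against a Raspberry Pi 5.
ollama run llama2 --verbose
```

The `--verbose` flag is what makes a cross-device comparison straightforward: it reports prompt and generation throughput after each response, so the same prompt can be timed on both machines.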

Syllabus

Intro
Installation
Model Runs
Conclusion


Taught by

Ian Wootten

Related Courses

The GenAI Stack - From Zero to Database-Backed Support Bot
Docker via YouTube
Ollama Crash Course: Running AI Models Locally Offline on CPU
1littlecoder via YouTube
AI Anytime, Anywhere - Getting Started with LLMs on Your Laptop
Docker via YouTube
Rust Ollama Tutorial - Interfacing with Ollama API Using ollama-rs
Jeremy Chone via YouTube
Ollama: Libraries, Vision Models, and OpenAI Compatibility Updates
Sam Witteveen via YouTube