Using Ollama to Run Local LLMs on the Steam Deck - Performance Comparison with Raspberry Pi 5
Offered By: Ian Wootten via YouTube
Course Description
Overview
Explore the process of installing and running Ollama, a tool for running large language models locally, on the Steam Deck gaming device. Learn how to set up Ollama, run various models, and compare the Steam Deck's performance with that of the Raspberry Pi 5. Gain insights into the Steam Deck's capabilities as a fully-fledged PC beyond gaming. Follow along with the installation steps, observe model run demonstrations, and discover the potential of running AI models on portable gaming hardware.
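The setup the video walks through can be sketched with the standard Ollama CLI workflow. This is a minimal outline, not the video's exact steps: the install script URL is Ollama's official one, but the specific model name used here is an assumption for illustration.

```shell
# Install Ollama via the official install script
# (on the Steam Deck this assumes desktop mode with a writable home directory)
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama server in the background if it is not already running
ollama serve &

# Pull and chat with a model interactively; the model name is an example,
# the video may use a different one
ollama run llama2

# Or ask a one-shot question without entering the interactive prompt
ollama run llama2 "Why is the sky blue?"
```

Comparing tokens-per-second from the same prompt on the Steam Deck and a Raspberry Pi 5 gives a rough performance comparison like the one demonstrated in the video.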
Syllabus
Intro
Installation
Model Runs
Conclusion
Taught by
Ian Wootten
Related Courses
Introduction to Linux - Linux Foundation via edX
Operating Systems (操作系统原理) - Peking University via Coursera
Internet of Things: Setting Up Your DragonBoard™ Development Platform - University of California, San Diego via Coursera
Information Security-3 - Indian Institute of Technology Madras via Swayam
Introduction to Embedded Systems Software and Development Environments - University of Colorado Boulder via Coursera