AI Anytime, Anywhere - Getting Started with LLMs on Your Laptop
Offered By: Docker via YouTube
Course Description
Overview
Explore the importance of running large language models locally and gain insights into how they work in this 47-minute conference talk from DockerCon 2023. Begin with an overview of LLM technology and learn to use the open-source tool Ollama to download and run models on your laptop. Discover how to customize models and build AI applications using Node.js and Python, gaining practical AI skills you can apply as soon as the session concludes. Access additional resources on Docker's AI/ML collection, LLM deployment, and the new GenAI stack.
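As a taste of the workflow the talk demonstrates, here is a minimal Python sketch that sends a prompt to a locally running Ollama server over its REST API. The endpoint and request fields follow Ollama's documented defaults; the model name "llama2" is an illustrative assumption, not a detail from the talk.

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server (assumption: stock install).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires `ollama pull llama2` and a running server):
# print(generate("llama2", "Why run LLMs locally?"))
```

The same request shape works from Node.js with `fetch`, which is how the talk's application examples in both languages can share one local model.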
Syllabus
AI Anytime, Anywhere: Getting started with LLMs on your Laptop Now (DockerCon 2023)
Taught by
Docker
Related Courses
The GenAI Stack - From Zero to Database-Backed Support Bot (Docker via YouTube)
Ollama Crash Course: Running AI Models Locally Offline on CPU (1littlecoder via YouTube)
Rust Ollama Tutorial - Interfacing with Ollama API Using ollama-rs (Jeremy Chone via YouTube)
Ollama: Libraries, Vision Models, and OpenAI Compatibility Updates (Sam Witteveen via YouTube)
Image Annotation with LLaVA and Ollama (Sam Witteveen via YouTube)