Running and Fine-tuning Open Source LLMs on Google Kubernetes Engine
Offered By: Linux Foundation via YouTube
Course Description
Overview
Dive into the world of open-source Large Language Models (LLMs) with this 32-minute conference talk from the Linux Foundation. Learn how to deploy and fine-tune LLMs on Google Kubernetes Engine (GKE) through hands-on demonstrations. Explore containerization techniques, discover optimal GKE cluster configurations for TPU/GPU acceleration, and gain insights into practical fine-tuning approaches tailored to specific use cases. Walk away equipped with code examples and blueprints to kickstart your own LLM-powered projects on Google Cloud infrastructure.
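The talk covers GKE cluster setup for accelerated LLM serving. As a rough illustration of the kind of setup involved (not taken from the talk itself), the sketch below creates a cluster and attaches a GPU node pool with `gcloud`; the cluster name, zone, machine type, accelerator choice, and manifest filename are all assumptions for the example.

```shell
# Illustrative sketch, not the talk's exact commands: create a small GKE
# cluster, then add a GPU node pool suitable for LLM inference.
gcloud container clusters create llm-cluster \
  --zone=us-central1-a \
  --num-nodes=1

# NVIDIA L4 GPUs on g2 machine types are one common choice for serving
# open-source LLMs; swap the accelerator type/count for your model size.
gcloud container node-pools create gpu-pool \
  --cluster=llm-cluster \
  --zone=us-central1-a \
  --machine-type=g2-standard-8 \
  --accelerator=type=nvidia-l4,count=1 \
  --num-nodes=1

# Deploy a containerized model server (hypothetical manifest, not shown).
kubectl apply -f llm-deployment.yaml
```

A TPU-backed variant would instead use a TPU machine type (for example the `ct5lp-*` family) on the node pool; the overall cluster-plus-node-pool pattern is the same.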
Syllabus
Running and Fine-tuning Open Source LLMs on Google Kubernetes Engine - Nael Fridhi, Google Cloud
Taught by
Linux Foundation
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent