Running and Fine-tuning Open Source LLMs on Google Kubernetes Engine
Offered By: Linux Foundation via YouTube
Course Description
Overview
Dive into the world of open-source Large Language Models (LLMs) with this 32-minute conference talk from the Linux Foundation. Learn how to deploy and fine-tune LLMs on Google Kubernetes Engine (GKE) through hands-on demonstrations. Explore containerization techniques, discover optimal GKE cluster configurations for TPU/GPU acceleration, and gain insights into practical fine-tuning approaches tailored to specific use cases. Walk away equipped with code examples and blueprints to kickstart your own LLM-powered projects on Google Cloud infrastructure.
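To give a flavor of the kind of blueprint the talk promises, a minimal sketch of serving an open-source LLM on a GPU-accelerated GKE node pool might look like the manifest below. The image name, project path, and GPU type are illustrative placeholders, not taken from the talk; the `cloud.google.com/gke-accelerator` node selector and `nvidia.com/gpu` resource limit are the standard GKE mechanisms for GPU scheduling.

```yaml
# Hypothetical Deployment serving an open-source LLM on GKE.
# Image, project path, and replica count are placeholders for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-server
  template:
    metadata:
      labels:
        app: llm-server
    spec:
      nodeSelector:
        # Schedule onto a GKE node pool provisioned with NVIDIA T4 GPUs.
        cloud.google.com/gke-accelerator: nvidia-tesla-t4
      containers:
      - name: server
        image: us-docker.pkg.dev/my-project/llm/server:latest  # placeholder image
        resources:
          limits:
            nvidia.com/gpu: "1"  # reserve one GPU for inference
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f`, a manifest along these lines pins the model server to GPU nodes while leaving the rest of the cluster free for CPU workloads; the same pattern extends to TPU node pools with the corresponding selectors.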
Syllabus
Running and Fine-tuning Open Source LLMs on Google Kubernetes Engine - Nael Fridhi, Google Cloud
Taught by
Linux Foundation
Related Courses
Fundamentals of Containers, Kubernetes, and Red Hat OpenShift (Red Hat via edX)
Configuration Management for Containerized Delivery (Microsoft via edX)
Getting Started with Google Kubernetes Engine - Español (Google Cloud via Coursera)
Getting Started with Google Kubernetes Engine - 日本語版 (Google Cloud via Coursera)
Architecting with Google Kubernetes Engine: Foundations en Español (Google Cloud via Coursera)