Running and Fine-tuning Open Source LLMs on Google Kubernetes Engine
Offered By: Linux Foundation via YouTube
Course Description
Overview
Dive into the world of open-source Large Language Models (LLMs) with this 32-minute conference talk from the Linux Foundation. Learn how to deploy and fine-tune LLMs on Google Kubernetes Engine (GKE) through hands-on demonstrations. Explore containerization techniques, discover optimal GKE cluster configurations for TPU/GPU acceleration, and gain insights into practical fine-tuning approaches tailored to specific use cases. Walk away equipped with code examples and blueprints to kickstart your own LLM-powered projects on Google Cloud infrastructure.
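The cluster-setup steps the talk demonstrates can be sketched roughly as follows. This is a hypothetical outline only, not the speaker's actual commands: the cluster name, zone, machine types, and accelerator choice are placeholder assumptions, and a real deployment manifest would still be needed.

```shell
# Hypothetical sketch of provisioning a GKE cluster for LLM serving.
# All names (llm-demo, gpu-pool, llm-deployment.yaml) are placeholders.

# Create a GKE cluster with a small CPU default node pool.
gcloud container clusters create llm-demo \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --num-nodes=1

# Add a GPU node pool for inference or fine-tuning
# (NVIDIA T4 shown purely as an example accelerator).
gcloud container node-pools create gpu-pool \
  --cluster=llm-demo \
  --zone=us-central1-a \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --num-nodes=1

# Deploy a containerized model server; the manifest would request the
# GPU via a nvidia.com/gpu resource limit.
kubectl apply -f llm-deployment.yaml
```

The talk's actual demos may differ (e.g. TPU node pools or GKE Autopilot); this sketch only illustrates the general shape of the workflow described above.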
Syllabus
Running and Fine-tuning Open Source LLMs on Google Kubernetes Engine - Nael Fridhi, Google Cloud
Taught by
Linux Foundation
Tags
Related Courses
Google Cloud Fundamentals: Core Infrastructure (Google via Coursera)
Google Cloud Big Data and Machine Learning Fundamentals (Google Cloud via Coursera)
Serverless Data Analysis with Google BigQuery and Cloud Dataflow en Français (Google Cloud via Coursera)
Essential Google Cloud Infrastructure: Foundation (Google Cloud via Coursera)
Elastic Google Cloud Infrastructure: Scaling and Automation (Google Cloud via Coursera)