Working with Gemma and Open LLMs on Google Kubernetes Engine
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the Gemma family of open models and learn how to fine-tune them on custom datasets for various tasks like text generation, translation, and summarization in this hands-on workshop. Discover how to combine Gemma with Kubernetes to leverage open source AI innovations with scalability, reliability, and ease of management. Through guided exercises, gain practical experience in working with Gemma and fine-tuning it on a Kubernetes cluster. Investigate options for serving Gemma on Kubernetes using accelerators and open source tools, enhancing your skills in deploying and managing large language models in a scalable environment.
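As a concrete illustration of serving an open model on Kubernetes with accelerators, the sketch below shows a minimal Deployment and Service that run Gemma behind the vLLM OpenAI-compatible server. This is an assumption-laden example, not material from the workshop: the resource names, replica count, and model choice (`google/gemma-2b`) are hypothetical, and a real cluster would also need GPU-enabled nodes and Hugging Face credentials configured.

```yaml
# Hypothetical sketch: serve Gemma via vLLM on a GPU-enabled cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gemma-server            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gemma-server
  template:
    metadata:
      labels:
        app: gemma-server
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest
        args: ["--model", "google/gemma-2b"]
        ports:
        - containerPort: 8000   # vLLM's default serving port
        resources:
          limits:
            nvidia.com/gpu: 1   # request one accelerator from the node
---
apiVersion: v1
kind: Service
metadata:
  name: gemma-service           # hypothetical name
spec:
  selector:
    app: gemma-server
  ports:
  - port: 8000
```

Applied with `kubectl apply -f`, this would expose an OpenAI-style completion endpoint inside the cluster; the workshop explores these serving options in more depth.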
Syllabus
Workshop: Working with Gemma and Open LLMs on Google Kubernetes Engine - Abdel Sghiouar & Victor Dantas
Taught by
Linux Foundation
Tags
Related Courses
Amazon SageMaker JumpStart Foundations (Japanese) - Amazon Web Services via AWS Skill Builder
AWS Flash - Generative AI with Diffusion Models - Amazon Web Services via AWS Skill Builder
AWS Flash - Operationalize Generative AI Applications (FMOps/LLMOps) - Amazon Web Services via AWS Skill Builder
AWS SimuLearn: Automate Fine-Tuning of an LLM - Amazon Web Services via AWS Skill Builder
AWS SimuLearn: Fine-Tune a Base Model with RLHF - Amazon Web Services via AWS Skill Builder