Creating Your Own LLM Tuning Platform with Open Source Technologies
Offered By: DevConf via YouTube
Course Description
Overview
Explore Large Language Model (LLM) tuning in this 35-minute conference talk from DevConf.US 2024. Dive into FMS HF Tuning, an open-source package from IBM that builds on Hugging Face's Supervised Fine-tuning Trainer to support multiple LLM tuning techniques. Gain insight into when, why, and where to use the library, and understand how it integrates architecturally with Open Data Hub (ODH) and the Red Hat OpenShift AI platform. Survey tuning techniques such as low-rank adaptation (LoRA), prompt tuning, and full fine-tuning, along with inference, and learn how to deploy and run production-ready LLM tuning and inference on ODH. The talk, presented by James Busche and Kelly Abuelsaad, leaves you with a deeper understanding of the complexities and benefits of LLM tuning and of the open-source tools available to enhance your AI solutions.
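To make the LoRA technique mentioned above concrete, here is a minimal sketch of the underlying idea: instead of updating a full weight matrix W, LoRA learns two small matrices B and A of rank r, and the effective weight becomes W + (alpha / r) * B @ A. This is an illustration of the math only, not the FMS HF Tuning API; all names are hypothetical.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply of X (m x k) and Y (k x n)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, where r is the LoRA rank.

    W: frozen base weight (d_out x d_in)
    A: down-projection (r x d_in)
    B: up-projection (d_out x r)
    """
    r = len(A)                 # rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)       # low-rank update, same shape as W
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Tiny example: 2x2 frozen weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]               # (r=1, d_in=2)
B = [[0.5], [0.25]]            # (d_out=2, r=1)
print(lora_effective_weight(W, A, B, alpha=1.0))
# → [[1.5, 1.0], [0.25, 1.5]]
```

Because only A and B are trained (r is much smaller than the weight dimensions), the number of trainable parameters drops dramatically compared with full fine-tuning, which is why LoRA is attractive for tuning large models on modest hardware.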
Syllabus
Creating Your Own LLM Tuning Platform with Open Source Technologies - DevConf.US 2024
Taught by
DevConf
Related Courses
Big Self-Supervised Models Are Strong Semi-Supervised Learners (Yannic Kilcher via YouTube)
A Transformer-Based Framework for Multivariate Time Series Representation Learning (Launchpad via YouTube)
Inside ChatGPT - Unveiling the Training Process of OpenAI's Language Model (Krish Naik via YouTube)
Fine Tune GPT-3.5 Turbo (Data Science Dojo via YouTube)
Yi 34B: The Rise of Powerful Mid-Sized Models - Base, 200k, and Chat (Sam Witteveen via YouTube)