The Future is Fine-Tuned - Deploying Task-Specific LLMs
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the advantages of deploying task-specific AI models in this 39-minute conference talk by Devvret Rishi from Predibase. Delve into the growing trend of organizations opting for specialized, fine-tuned LLMs over large, generalized models like ChatGPT. Learn about the cost-effectiveness and improved latency of smaller, task-specific AI solutions. Discover the declarative ML framework Ludwig, which Predibase builds on to simplify AI model building for engineers. Gain insights into the motivations behind this approach and the technical details of fine-tuning popular open-source LLMs like Llama 2. Understand how to deploy these models cost-effectively using LoRA Exchange (LoRAX), which serves many fine-tuned adapters on a single base model, enabling organizations to create tailored AI solutions for their specific needs.
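To illustrate the kind of workflow the talk covers, the sketch below shows how a LoRA fine-tune of Llama 2 might be configured with Ludwig's Python API. It is a minimal sketch, assuming Ludwig 0.8+ with LLM support; the config fields, model id, and dataset file are illustrative assumptions, not details taken from the talk.

# Minimal sketch: LoRA fine-tuning of Llama 2 with Ludwig's declarative config.
# Assumes Ludwig 0.8+ with LLM support; field names and dataset are illustrative.
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",   # assumed Hugging Face model id
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "response", "type": "text"}],
    "adapter": {"type": "lora"},                # train a small LoRA adapter, not the full model
    "trainer": {"type": "finetune", "epochs": 3},
}

model = LudwigModel(config)
model.train(dataset="instruction_data.csv")     # hypothetical CSV with prompt/response columns

At serving time, LoRAX can load many such adapters onto one shared Llama 2 deployment, which is what makes running a separate fine-tune per task cost-effective.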
Syllabus
The Future is Fine-Tuned: Deploying Task-specific LLMs - Devvret Rishi, Predibase
Taught by
Linux Foundation
Related Courses
Developing a Tabular Data Model (Microsoft via edX)
Data Science in Action - Building a Predictive Churn Model (SAP Learning)
Serverless Machine Learning with Tensorflow on Google Cloud Platform 日本語版 (Google Cloud via Coursera)
Intro to TensorFlow em Português Brasileiro (Google Cloud via Coursera)
Serverless Machine Learning con TensorFlow en GCP (Google Cloud via Coursera)