Retraining vs RAG vs Context - Using Local Data on Large Language Models
Offered By: Dave's Garage via YouTube
Course Description
Overview
Explore three methods of using local data with large language models: retraining, retrieval-augmented generation (RAG), and context documents. Learn how each technique can be applied to both local and online models, how the approaches differ, and when one is preferable to another for customizing an LLM to a specific task or domain.
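As a rough illustration of the difference between the RAG and context-document approaches the video compares, here is a minimal sketch (the toy corpus, scoring function, and prompt format are illustrative assumptions, not from the video): RAG retrieves only the most relevant local documents and places those into the prompt, whereas context-stuffing places every document into the prompt, window size permitting.

```python
# Minimal sketch contrasting RAG-style retrieval with plain context-stuffing.
# The corpus, the keyword-overlap scoring, and the prompt layout are
# illustrative assumptions only; real RAG pipelines use embedding similarity.

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query; keep the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend the chosen context documents to the user's question."""
    return "\n".join(context) + "\n\nQuestion: " + query

docs = [
    "The garage door opener uses a 315 MHz remote.",
    "Our cat prefers tuna over salmon.",
]
query = "What frequency does the garage remote use?"

# RAG: only the document relevant to the query reaches the model.
rag_prompt = build_prompt(query, retrieve(query, docs))

# Context-stuffing: every local document reaches the model.
stuffed_prompt = build_prompt(query, docs)
```

Retraining, by contrast, bakes the local data into the model weights themselves rather than into the prompt, which is why it is the most expensive of the three options.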
Syllabus
Retraining vs RAG vs Context: Your Local Data on LLMs!
Taught by
Dave's Garage
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent