Supercharging LLM Performance Without AI Training
Offered By: Snorkel AI via YouTube
Course Description
Overview
Discover an approach to enhancing large language model performance without additional training in this 20-minute conference talk from Snorkel AI's Foundation Model Virtual Summit 2023. Explore the methodology developed by Simran Arora, ML researcher at Stanford University, and her collaborators to boost LLM capabilities. Gain insights into techniques that improve model performance without the need for extensive retraining, and learn practical strategies for leveraging existing models more effectively. Suited to AI researchers, developers, and enthusiasts looking to maximize the potential of large language models in their projects.
Syllabus
Supercharge Your LLM Performance (Without AI Training)
Taught by
Snorkel AI
Related Courses
Solving the Last Mile Problem of Foundation Models with Data-Centric AI
MLOps.community via YouTube
Foundational Models in Enterprise AI - Challenges and Opportunities
MLOps.community via YouTube
Knowledge Distillation Demystified: Techniques and Applications
Snorkel AI via YouTube
Model Distillation - From Large Models to Efficient Enterprise Solutions
Snorkel AI via YouTube
Curate Training Data via Labeling Functions - 10 to 100x Faster
Snorkel AI via YouTube