Fine-Tuning Large Language Models Faster Using Bonito for Task-Specific Training Data Generation
Offered By: Snorkel AI via YouTube
Course Description
Overview
Discover how Bonito, an open-source model for conditional task generation, can speed up the fine-tuning of large language models in this 47-minute research talk. Explore how Bonito converts unannotated text into task-specific training datasets for instruction tuning, enabling faster adaptation of LLMs to specialized tasks. Join Nihal V. Nayak, a Ph.D. student in Brown University's Department of Computer Science, as he delves into Bonito's capabilities for improving zero-shot task adaptation on private data. Learn how to accelerate the creation of instruction-tuning datasets, identify the use cases where the model works best, and understand how existing datasets enhance Bonito's effectiveness. Gain insight into an approach that could significantly impact both research and enterprise applications of large language models.
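For a sense of the workflow the talk describes, below is a minimal sketch of generating synthetic instruction-tuning data from unannotated text with Bonito, loosely following the usage shown in the BatsResearch/bonito repository README. The model name (BatsResearch/bonito-v1), the example dataset, and the generate_tasks signature are taken from that README and may differ across package versions, so treat this as an illustration rather than the exact code covered in the talk.

```python
# A minimal sketch, assuming the `bonito` package from the BatsResearch/bonito
# repository and its vLLM-based API; names and arguments may vary by version.
from bonito import Bonito
from datasets import load_dataset
from vllm import SamplingParams

# Load the pretrained Bonito model from Hugging Face (served via vLLM on GPU).
bonito = Bonito("BatsResearch/bonito-v1")

# Any corpus of unannotated text works; here, a small sample from the
# dataset used in the Bonito experiments.
unannotated_text = load_dataset(
    "BatsResearch/bonito-experiment", "unannotated_contract_nli"
)["train"].select(range(10))

# Sampling settings for synthetic task generation.
sampling_params = SamplingParams(max_tokens=256, top_p=0.95, temperature=0.5, n=1)

# Convert unannotated passages into (instruction, response) pairs for a
# chosen task type, e.g. natural language inference ("nli").
synthetic_dataset = bonito.generate_tasks(
    unannotated_text,
    context_col="input",   # column holding the raw text
    task_type="nli",       # target task type for the generated pairs
    sampling_params=sampling_params,
)

print(synthetic_dataset)  # a Hugging Face Dataset ready for instruction tuning
```

The resulting dataset can then be fed to standard instruction-tuning tooling, such as a supervised fine-tuning trainer, to adapt a target LLM to the specialized task.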
Syllabus
Fine-tune large language models faster using Bonito to generate task-specific training data!
Taught by
Snorkel AI
Related Courses
Towards Reliable Use of Large Language Models - Better Detection, Consistency, and Instruction-Tuning (Simons Institute via YouTube)
Role of Instruction-Tuning and Prompt Engineering in Clinical Domain - MedAI 125 (Stanford University via YouTube)
Generative AI Advance Fine-Tuning for LLMs (IBM via Coursera)
SeaLLMs - Large Language Models for Southeast Asia (VinAI via YouTube)
Fine-tuning LLMs with Hugging Face SFT and QLoRA - LLMOps Techniques (LLMOps Space via YouTube)