Fine-Tuning Large Language Models Faster Using Bonito for Task-Specific Training Data Generation
Offered By: Snorkel AI via YouTube
Course Description
Overview
Discover how Bonito, a novel open-source model, revolutionizes the fine-tuning process for large language models in this 47-minute research talk. Explore the potential of generating task-specific training datasets for instruction tuning, enabling faster adaptation of LLMs to specialized tasks. Join Nihal V. Nayak, a Ph.D. student from Brown University's Department of Computer Science, as he delves into Bonito's capabilities for improving zero-shot task adaptation on private data. Learn how to accelerate the creation of instruction-tuning datasets, identify optimal use cases for the model, and understand the role of existing datasets in enhancing Bonito's effectiveness. Gain valuable insights into this cutting-edge approach that could significantly impact both research and enterprise applications in the field of AI and large language models.
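The workflow described above, turning unannotated private text into (instruction, response) pairs for fine-tuning, can be sketched conceptually. Note this is a minimal illustration with a hypothetical stand-in generator, not Bonito's actual API; the real model (BatsResearch/bonito) conditions a trained LLM on a passage and a task type to produce each pair.

```python
# Conceptual sketch of Bonito-style synthetic data generation.
# generate_task() is a hypothetical stub standing in for the Bonito
# model, which generates an (instruction, response) pair from a
# passage and a requested task type.

def generate_task(passage: str, task_type: str) -> dict:
    """Stand-in for model-based task generation (hypothetical)."""
    if task_type == "qa":
        # A real model would produce a passage-grounded question and answer;
        # here we fabricate a trivial pair just to show the data shape.
        return {
            "instruction": f"Read the passage and answer a question about it:\n\n{passage}",
            "response": passage.split(".")[0] + ".",
        }
    raise ValueError(f"unsupported task type: {task_type}")

def build_instruction_dataset(passages, task_type="qa"):
    """Convert unannotated passages into instruction-tuning examples."""
    return [generate_task(p, task_type) for p in passages]

corpus = [
    "Bonito converts unannotated text into instruction-tuning data. "
    "It targets zero-shot adaptation of LLMs to specialized tasks.",
]
dataset = build_instruction_dataset(corpus)
print(len(dataset), sorted(dataset[0].keys()))
```

The resulting list of instruction/response pairs is the kind of task-specific dataset the talk describes feeding into a standard instruction-tuning loop.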
Syllabus
Fine-tune large language models faster using Bonito to generate task-specific training data!
Taught by
Snorkel AI
Related Courses
Business Considerations for 5G with Edge, IoT, and AI — Linux Foundation via edX
FinTech for Finance and Business Leaders — ACCA via edX
AI-900: Microsoft Certified Azure AI Fundamentals — A Cloud Guru
AWS Certified Machine Learning - Specialty (LA) — A Cloud Guru
Azure AI Components and Services — A Cloud Guru