YoVDO

Fine-Tuning Large Language Models Faster Using Bonito for Task-Specific Training Data Generation

Offered By: Snorkel AI via YouTube

Tags

Fine-Tuning Courses
Artificial Intelligence Courses
Machine Learning Courses
Data Augmentation Courses
Instruction-Tuning Courses

Course Description

Overview

Discover how Bonito, a novel open-source model, revolutionizes the fine-tuning process for large language models in this 47-minute research talk. Explore the potential of generating task-specific training datasets for instruction tuning, enabling faster adaptation of LLMs to specialized tasks. Join Nihal V. Nayak, a Ph.D. student from Brown University's Department of Computer Science, as he delves into Bonito's capabilities for improving zero-shot task adaptation on private data. Learn how to accelerate the creation of instruction-tuning datasets, identify optimal use cases for the model, and understand the role of existing datasets in enhancing Bonito's effectiveness. Gain valuable insights into this cutting-edge approach that could significantly impact both research and enterprise applications in the field of AI and large language models.
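The workflow described in the talk, converting unannotated domain text into synthetic instruction-tuning pairs, roughly follows the open-source Bonito package. Below is a minimal sketch; the class and method names (Bonito, generate_tasks, context_col, task_type) and the example dataset are taken from the BatsResearch/bonito repository's README and should be verified against the current release before use.

```python
# Sketch: generate a synthetic instruction-tuning dataset with Bonito.
# Assumes the open-source package from github.com/BatsResearch/bonito
# (which serves the model through vLLM) and the Hugging Face "datasets" library.
from bonito import Bonito
from vllm import SamplingParams
from datasets import load_dataset

# Load the Bonito model checkpoint.
bonito = Bonito("BatsResearch/bonito-v1")

# Unannotated text from the target (e.g., private) domain; the dataset name and
# the "context" column here are illustrative.
unannotated = load_dataset(
    "BatsResearch/bonito-experiment", "unannotated_contract_nli"
)["train"].select(range(16))

# Convert each passage into instruction-response pairs for a chosen task type,
# here natural language inference ("nli").
sampling_params = SamplingParams(max_tokens=256, top_p=0.95, temperature=0.5, n=1)
synthetic_dataset = bonito.generate_tasks(
    unannotated,
    context_col="context",
    task_type="nli",
    sampling_params=sampling_params,
)

# The resulting dataset can then be used to fine-tune a target LLM for
# zero-shot adaptation to the specialized domain.
print(synthetic_dataset)
```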

Syllabus

Fine-tune large language models faster using Bonito to generate task-specific training data!


Taught by

Snorkel AI

Related Courses

TensorFlow: Working with NLP
LinkedIn Learning
Introduction to Video Editing - Video Editing Tutorials
Great Learning via YouTube
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning
Python Engineer via YouTube
GPT3 and Finetuning the Core Objective Functions - A Deep Dive
David Shapiro ~ AI via YouTube
How to Build a Q&A AI in Python - Open-Domain Question-Answering
James Briggs via YouTube