YoVDO

Fine-Tuning Large Language Models Faster Using Bonito for Task-Specific Training Data Generation

Offered By: Snorkel AI via YouTube

Tags

Fine-Tuning Courses, Artificial Intelligence Courses, Machine Learning Courses, Data Augmentation Courses, Instruction-Tuning Courses

Course Description

Overview

Discover how Bonito, a novel open-source model, revolutionizes the fine-tuning process for large language models in this 47-minute research talk. Explore the potential of generating task-specific training datasets for instruction tuning, enabling faster adaptation of LLMs to specialized tasks. Join Nihal V. Nayak, a Ph.D. student from Brown University's Department of Computer Science, as he delves into Bonito's capabilities for improving zero-shot task adaptation on private data. Learn how to accelerate the creation of instruction-tuning datasets, identify optimal use cases for the model, and understand the role of existing datasets in enhancing Bonito's effectiveness. Gain valuable insights into this cutting-edge approach that could significantly impact both research and enterprise applications in the field of AI and large language models.
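To make the idea concrete, here is a minimal sketch (not the talk's official code) of prompting the open-source Bonito model to turn an unannotated passage into a synthetic instruction-tuning example. The checkpoint name "BatsResearch/bonito-v1", the task-type label, and the prompt layout are assumptions made for illustration; consult the Bonito repository for the exact conditioning format and recommended generation settings.

```python
# Sketch: generate one synthetic training example from raw text with Bonito,
# using the standard Hugging Face transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "BatsResearch/bonito-v1"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Unannotated, domain-specific text you want the downstream LLM to adapt to.
passage = (
    "Bonito is an open-source model for conditional task generation: it converts "
    "unannotated text into task-specific instruction-tuning datasets."
)

# Assumed conditioning format: a task type plus the raw context, after which
# Bonito generates an (instruction, response) pair grounded in the passage.
prompt = (
    "<|tasktype|>\nextractive question answering\n"
    f"<|context|>\n{passage}\n<|task|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)

# The decoded continuation is one synthetic example that can be added to an
# instruction-tuning dataset for fine-tuning a downstream model on this domain.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Repeating this over a corpus of private or domain-specific documents yields the kind of task-specific dataset the talk describes for faster zero-shot task adaptation.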

Syllabus

Fine-tune large language models faster using Bonito to generate task-specific training data!


Taught by

Snorkel AI

Related Courses

Convolutional Neural Networks in TensorFlow
DeepLearning.AI via Coursera
Emotion AI: Facial Key-points Detection
Coursera Project Network via Coursera
Transfer Learning for Food Classification
Coursera Project Network via Coursera
Facial Expression Classification Using Residual Neural Nets
Coursera Project Network via Coursera
Apply Generative Adversarial Networks (GANs)
DeepLearning.AI via Coursera