How Many Labelled Examples Do You Need for a BERT-sized Model to Beat GPT4 on Predictive Tasks?

Offered By: MLOps World: Machine Learning in Production via YouTube

Tags

Machine Learning Courses, MLOps Courses, BERT Courses, Few-shot Learning Courses, GPT-4 Courses, Predictive Modeling Courses, Text Classification Courses, In-context Learning Courses

Course Description

Overview

Explore the comparison between Large Language Models (LLMs) and traditional supervised machine learning in this 28-minute conference talk from MLOps World: Machine Learning in Production. Delve into in-context learning, the new paradigm offered by LLMs, and how it can outperform methods trained on explicit labelled data across many generative tasks. Examine how in-context learning can also be applied to predictive tasks such as text categorization and entity recognition with minimal labelled examples. Learn from Matthew Honnibal, Founder and CTO of Explosion AI, as he discusses how many labelled examples a BERT-sized model needs to outperform GPT-4 on predictive tasks, and gain insight into the trade-offs between prompting an LLM and training a smaller supervised model.
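
The talk's central comparison lends itself to a short illustration. Below is a minimal sketch, not taken from the talk, contrasting the two paradigms on a toy sentiment task: a few-shot prompt assembled for an LLM such as GPT-4, and the same handful of labelled examples used to fine-tune a BERT-sized encoder via the Hugging Face `transformers` Trainer API (assumes `torch`, `transformers`, and `accelerate` are installed). The label set and example texts are hypothetical; in practice the fine-tuned route needs far more examples, which is precisely the quantity the talk tries to pin down.

```python
# A minimal sketch (not from the talk) contrasting in-context learning
# with fine-tuning a BERT-sized model. Labels and texts are hypothetical.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = ["positive", "negative"]

# --- Paradigm 1: in-context learning (no gradient updates) ---------------
# A handful of labelled examples go directly into the prompt sent to an
# LLM such as GPT-4; the model predicts the label from context alone.
def build_few_shot_prompt(examples, query):
    lines = ["Classify each text as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}\n")
    lines.append(f"Text: {query}\nLabel:")
    return "\n".join(lines)

few_shot = [("Great talk, learned a lot.", "positive"),
            ("The audio kept cutting out.", "negative")]
print(build_few_shot_prompt(few_shot, "Clear and well paced."))

# --- Paradigm 2: fine-tuning a BERT-sized model on labelled data ---------
# The same examples (in practice, hundreds to thousands) update the
# weights of a small encoder, which then runs cheaply at inference time.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

class TinyDataset(torch.utils.data.Dataset):
    """Wraps (text, label) pairs as tokenized tensors for the Trainer."""
    def __init__(self, pairs):
        self.enc = tokenizer([t for t, _ in pairs], truncation=True,
                             padding=True, return_tensors="pt")
        self.labels = torch.tensor([LABELS.index(l) for _, l in pairs])
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-clf", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=TinyDataset(few_shot),
)
trainer.train()
```

The design difference the talk examines is visible here: the prompt-based route needs no training loop at all, while the fine-tuned route front-loads labelling and training cost in exchange for a far smaller, cheaper model at inference time.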

Syllabus

How Many Labelled Examples Do You Need for a BERT-sized Model to Beat GPT4 on Predictive Tasks?


Taught by

MLOps World: Machine Learning in Production

Related Courses

Stanford Seminar - Enabling NLP, Machine Learning, and Few-Shot Learning Using Associative Processing
Stanford University via YouTube
GUI-Based Few Shot Classification Model Trainer - Demo
James Briggs via YouTube
HyperTransformer - Model Generation for Supervised and Semi-Supervised Few-Shot Learning
Yannic Kilcher via YouTube
GPT-3 - Language Models Are Few-Shot Learners
Yannic Kilcher via YouTube
iMAML - Meta-Learning with Implicit Gradients
Yannic Kilcher via YouTube