How Many Labelled Examples Do You Need for a BERT-sized Model to Beat GPT-4 on Predictive Tasks?
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Explore the trade-off between Large Language Models (LLMs) and traditional supervised machine learning in this 28-minute conference talk from MLOps World: Machine Learning in Production. Delve into in-context learning, the new paradigm offered by LLMs in which task instructions and a handful of examples in the prompt take the place of explicit labelled training data, and see why it has proven superior for many generative tasks. Examine how the same approach can be applied to predictive tasks such as text categorization and entity recognition with minimal labelled examples. Matthew Honnibal, founder and CTO of Explosion AI, discusses how many labelled examples a BERT-sized model needs before it outperforms GPT-4 on predictive tasks, offering practical insight into the evolving landscape of machine learning techniques.
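To make the contrast concrete, here is a minimal sketch, not taken from the talk itself: the labels, example texts, and the choice of bert-base-uncased with the Hugging Face transformers API are all illustrative assumptions. It shows how in-context learning frames text categorization as a prompt carrying a few inline examples, whereas the fine-tuning route trains a BERT-sized classifier on an explicit labelled dataset.

    # Sketch: in-context learning vs. fine-tuning for text categorization.
    # Labels, texts, and model names are illustrative assumptions, not from the talk.

    # --- In-context learning: the "training data" lives in the prompt. ---
    FEW_SHOT_EXAMPLES = [
        ("The screen cracked after one day.", "negative"),
        ("Battery easily lasts the whole weekend.", "positive"),
    ]

    def build_prompt(text: str) -> str:
        """Assemble a few-shot classification prompt for an LLM such as GPT-4."""
        lines = ["Classify each review as positive or negative.", ""]
        for example, label in FEW_SHOT_EXAMPLES:
            lines.append(f"Review: {example}\nLabel: {label}\n")
        lines.append(f"Review: {text}\nLabel:")
        return "\n".join(lines)

    print(build_prompt("Shipping was fast but the case feels cheap."))

    # --- Fine-tuning: a BERT-sized model learns from explicit labelled data. ---
    # With enough labelled examples (the talk's central question is how many),
    # a small fine-tuned encoder can match or beat the prompted LLM. A sketch
    # using the Hugging Face transformers API:
    #
    #   from transformers import AutoModelForSequenceClassification, AutoTokenizer
    #   tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    #   model = AutoModelForSequenceClassification.from_pretrained(
    #       "bert-base-uncased", num_labels=2)
    #   # ...tokenize the labelled dataset and train with transformers.Trainer...

The prompt-building half runs as plain Python with no dependencies; the fine-tuned model, once trained, handles every new input without re-sending examples, which is where per-example cost and latency start to favour the smaller model.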
Syllabus
How Many Labelled Examples Do You Need for a BERT-sized Model to Beat GPT-4 on Predictive Tasks?
Taught by
MLOps World: Machine Learning in Production
Related Courses
Sentiment Analysis with Deep Learning using BERT (Coursera Project Network via Coursera)
Natural Language Processing with Attention Models (DeepLearning.AI via Coursera)
Fine Tune BERT for Text Classification with TensorFlow (Coursera Project Network via Coursera)
Deploy a BERT question answering bot on Django (Coursera Project Network via Coursera)
Generating discrete sequences: language and music (Ural Federal University via edX)