The Future of Natural Language Processing
Offered By: Hugging Face via YouTube
Course Description
Overview
Explore the future of Natural Language Processing in this comprehensive 1-hour lecture by Thomas Wolf, Science Lead at Hugging Face. Delve into transfer learning, examining open questions, current trends, limits, and future directions. Gain insights from a curated selection of late-2019 and early-2020 research papers on model size and computational efficiency, out-of-domain generalization, model evaluation, fine-tuning, sample efficiency, common sense, and inductive biases. Analyze the impact of increasing data and model sizes, compare in-domain and out-of-domain generalization, and investigate solutions to robustness issues in NLP. Discuss the rise of Natural Language Generation (NLG) and its implications for the field. Address critical questions surrounding inductive bias and common sense in AI language models. Access the accompanying slides for visual support, and follow Hugging Face and Thomas Wolf on Twitter for ongoing updates in the rapidly evolving world of NLP.
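To make the transfer-learning recipe the lecture examines more concrete (pretrain on large unlabeled corpora, then fine-tune on a small labeled task), here is a minimal sketch using the Hugging Face transformers library. The distilbert-base-uncased checkpoint and the toy two-example batch are illustrative assumptions, not material from the lecture itself.

```python
# Minimal pretrain-then-fine-tune sketch with Hugging Face transformers.
# Checkpoint and data below are illustrative assumptions, not from the lecture.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load a pretrained encoder plus a freshly initialized classification head.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# A toy labeled batch standing in for a downstream fine-tuning dataset.
texts = ["a great lecture", "a dull lecture"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# One fine-tuning step: the pretrained weights and the new head
# are updated jointly on the downstream objective.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
```

This same pattern underlies several of the open questions the lecture raises, such as how much fine-tuning data is needed and how well the resulting model generalizes out of domain.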
Syllabus
Intro
Open questions, current trends, limits
Model size and computational efficiency
Using more and more data
Pretraining on more data
Fine-tuning on more data
More data or better models
In-domain vs. out-of-domain generalization
The limits of NLU and the rise of NLG
Solutions to the lack of robustness
Reporting and evaluation issues
The inductive bias question
The common sense question
Taught by
Hugging Face
Related Courses
Macroeconometric Forecasting
International Monetary Fund via edX
Machine Learning With Big Data
University of California, San Diego via Coursera
Data Science at Scale - Capstone Project
University of Washington via Coursera
Structural Equation Model and its Applications | 结构方程模型及其应用 (Cantonese)
The Chinese University of Hong Kong via Coursera
Data Science in Action - Building a Predictive Churn Model
SAP Learning