
Where Does Negative Transfer Come From? On the Implicit Bias of SGD in Multi-Task Learning

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Multi-Task Learning Courses, Machine Learning Courses, Neural Networks Courses, Generalization Courses, Stochastic Gradient Descent Courses

Course Description

Overview

Explore the complexities of negative transfer in multi-task learning through this 29-minute conference talk by David Mueller from the Center for Language & Speech Processing at Johns Hopkins University. Delve into the relationship between task conflict and negative transfer, discovering that negative outcomes can occur even without significant task conflicts. Examine the crucial role of optimization temperature in negative transfer, and learn how poorly chosen hyperparameters may be responsible for suboptimal performance rather than inherent task incompatibilities. Investigate the connection between these findings and the implicit bias of Stochastic Gradient Descent (SGD), which suggests a preference for solutions with high gradient coherence. Uncover the limitations of current explanations for negative transfer, including task conflict and implicit bias, and recognize the need for innovative multi-task optimization methods. Challenge conventional wisdom about neural network generalization and gain insights that could reshape approaches to multi-task learning optimization.
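The talk frames task conflict in terms of gradient coherence: how well the per-task gradients on shared parameters align. A minimal sketch of that idea, using an assumed two-task model with a shared encoder and illustrative random data (not material from the talk itself), is shown below.

```python
# Minimal sketch (illustrative assumptions, not the speaker's code):
# measure task conflict via gradient coherence, i.e. the cosine similarity
# between per-task gradients on the shared parameters of a multi-task model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hard parameter sharing: one shared encoder, two task-specific heads.
shared = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
head_a = nn.Linear(32, 1)
head_b = nn.Linear(32, 1)
shared_params = list(shared.parameters())

# Toy data standing in for two regression tasks.
x = torch.randn(64, 10)
y_a = torch.randn(64, 1)
y_b = torch.randn(64, 1)
loss_fn = nn.MSELoss()

def shared_grad(loss):
    """Flatten the gradient of a task loss w.r.t. the shared parameters."""
    grads = torch.autograd.grad(loss, shared_params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

features = shared(x)
g_a = shared_grad(loss_fn(head_a(features), y_a))
g_b = shared_grad(loss_fn(head_b(features), y_b))

# Gradient coherence between the two tasks; negative values indicate
# conflicting update directions on the shared weights.
coherence = torch.nn.functional.cosine_similarity(g_a, g_b, dim=0)
print(f"gradient coherence between tasks: {coherence.item():.3f}")
```

Tracking this quantity over training is one simple way to probe whether observed negative transfer coincides with low gradient coherence, or arises even when the tasks' gradients largely agree.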

Syllabus

Where Does Negative Transfer Come From? On the Implicit Bias of SGD in Multi-Task Learning


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Deep Learning for Bioscientists
The University of Nottingham via FutureLearn
Structuring Machine Learning Projects
DeepLearning.AI via Coursera
Structuring Machine Learning Projects (Korean-language version)
DeepLearning.AI via Coursera
Structuring Machine Learning Projects (Russian-language version)
DeepLearning.AI via Coursera
Improving Retrieval with RAG Fine-tuning
Pluralsight