Gradient Surgery for Multi-Task Learning
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a comprehensive analysis of gradient surgery for multi-task learning in this informative video. Delve into the challenges of multi-task learning, particularly when the gradients of different tasks have significantly different magnitudes or point in conflicting directions. Learn about PCGrad, a method that projects away conflicting gradient components while retaining convergence guarantees. Examine the three conditions of the multi-task optimization landscape (conflicting gradient directions, high positive curvature, and large differences in gradient magnitude) that together cause detrimental gradient interference, and discover a general approach for avoiding such interference. Investigate how this gradient surgery technique projects a task's gradient onto the normal plane of the gradient of any conflicting task, leading to substantial gains in efficiency and performance on challenging multi-task supervised and reinforcement learning problems. Understand the model-agnostic nature of the approach and its potential to enhance previously proposed multi-task architectures.
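The overview describes the PCGrad update rule: whenever two task gradients conflict (negative cosine similarity), each task's gradient is projected onto the normal plane of the other's before the gradients are summed. Below is a minimal NumPy sketch of that projection step; the function name `pcgrad`, the use of flattened gradient vectors, and the toy inputs are illustrative assumptions, not the authors' reference implementation.

```python
import random

import numpy as np


def pcgrad(task_grads):
    """Project each task gradient onto the normal plane of any
    conflicting task's gradient, then sum the adjusted gradients.

    task_grads: list of 1-D numpy arrays, one flattened gradient per task.
    """
    projected = []
    for g_i in task_grads:
        g = g_i.copy()
        # Visit the other tasks in random order, as the paper prescribes.
        others = [g_j for g_j in task_grads if g_j is not g_i]
        random.shuffle(others)
        for g_j in others:
            dot = float(g @ g_j)
            if dot < 0:  # conflicting directions: negative cosine similarity
                # Subtract the component of g that lies along g_j.
                g = g - (dot / float(g_j @ g_j)) * g_j
        projected.append(g)
    return np.sum(projected, axis=0)


# Toy example with two conflicting gradients (hypothetical values).
g1 = np.array([1.0, 0.0])
g2 = np.array([-1.0, 1.0])  # g1 @ g2 = -1 < 0, so surgery applies
print(pcgrad([g1, g2]))     # projected sum: [0.5, 1.5]
```

In the toy example, projecting g1 against g2 yields [0.5, 0.5] and projecting g2 against g1 yields [0.0, 1.0], so the conflicting components are removed before summation.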
Syllabus
Introduction
What is Multi-Task Learning
Example
Loss Function
Theorems
Conditions
Tragic Triad Effect
Multi-Task Learning
Taught by
Yannic Kilcher
Related Courses
6.S094: Deep Learning for Self-Driving Cars
Massachusetts Institute of Technology via Independent
Natural Language Processing (NLP)
Microsoft via edX
Deep Reinforcement Learning
Nvidia Deep Learning Institute via Udacity
Advanced AI: Deep Reinforcement Learning in Python
Udemy
Self-driving go-kart with Unity-ML
Udemy