YoVDO

Gradient Surgery for Multi-Task Learning

Offered By: Yannic Kilcher via YouTube

Tags

Multi-Task Learning Courses
Deep Learning Courses
Deep Reinforcement Learning Courses

Course Description

Overview

Explore a comprehensive analysis of gradient surgery for multi-task learning in this informative video. Delve into the challenges of multi-task learning, particularly when gradients of different tasks have significantly different magnitudes or conflicting directions. Learn about PCGrad, a method that projects conflicting gradients while maintaining optimality guarantees. Examine the three conditions in the multi-task optimization landscape that cause detrimental gradient interference and discover a general approach to avoid such interference. Investigate how this gradient surgery technique projects a task's gradient onto the normal plane of conflicting task gradients, leading to substantial improvements in efficiency and performance across challenging multi-task supervised and reinforcement learning problems. Understand the model-agnostic nature of this approach and its potential to enhance previously-proposed multi-task architectures.
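The core operation described above — projecting a task's gradient onto the normal plane of any other task's gradient that conflicts with it — can be sketched in a few lines. This is a minimal NumPy illustration of the projection step, not the paper's implementation: the function name and the use of plain 1-D vectors are assumptions for clarity (in practice the inputs would be per-task gradients of the shared model parameters).

```python
import numpy as np

def pcgrad(grads, rng=None):
    """Sketch of the PCGrad projection step.

    grads: list of per-task gradient vectors (1-D NumPy arrays).
    Returns the combined gradient after projecting each task's
    gradient away from the gradients it conflicts with.
    """
    rng = rng or np.random.default_rng(0)
    projected = []
    for i, g_i in enumerate(grads):
        g = g_i.copy()
        # Visit the other tasks in random order, as the method prescribes.
        others = [j for j in range(len(grads)) if j != i]
        rng.shuffle(others)
        for j in others:
            g_j = grads[j]
            dot = g @ g_j
            if dot < 0:  # negative cosine similarity -> conflicting gradients
                # Project g onto the normal plane of g_j,
                # removing the conflicting component.
                g = g - (dot / (g_j @ g_j)) * g_j
        projected.append(g)
    # Sum the surgically altered per-task gradients.
    return sum(projected)
```

With two conflicting gradients such as `[1, 0]` and `[-1, 1]`, the combined result no longer points against either task: its dot product with each original gradient is non-negative, which is the interference-avoidance property the video discusses.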

Syllabus

Introduction
What is multi-task learning?
Example
Loss Function
Theorems
Conditions
The Tragic Triad
Multi-Task Learning


Taught by

Yannic Kilcher

Related Courses

Build your first Self Driving Car using AWS DeepRacer
Coursera Project Network via Coursera
Fundamentals of Deep Reinforcement Learning
Learn Ventures via edX
Deep Reinforcement Learning in Python
DataCamp
Natural Language Processing (NLP)
Microsoft via edX
Reinforcement Learning Course: Intro to Advanced Actor Critic Methods
freeCodeCamp