Modular and Composable Transfer Learning
Offered By: USC Information Sciences Institute via YouTube
Course Description
Overview
Explore modular and composable transfer learning strategies in this informative lecture presented by Jonas Pfeiffer at USC Information Sciences Institute. Delve into adapter-based fine-tuning techniques for parameter-efficient transfer learning with large pre-trained transformer models. Discover how small neural network components introduced at each layer can encapsulate downstream task information while keeping pre-trained parameters frozen. Learn about the modularity and composability of adapters for improving target task performance and achieving zero-shot cross-lingual transfer. Examine the benefits of adding modularity during pre-training to mitigate catastrophic interference and address challenges in multilingual models. Gain insights from Pfeiffer's extensive research experience in modular representation learning across multi-task, multilingual, and multi-modal contexts.
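To make the adapter recipe described above concrete, here is a minimal PyTorch sketch of the idea: a small residual bottleneck module per layer, a frozen pre-trained backbone, and a language-plus-task adapter stack of the kind used for zero-shot cross-lingual transfer. All names and dimensions (BottleneckAdapter, StackedAdapters, bottleneck_dim) are illustrative assumptions, not code from the lecture.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    # A small residual bottleneck inserted after a transformer sub-layer
    # (illustrative; not the lecture's implementation).
    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # The residual keeps the frozen pre-trained representation intact;
        # the adapter only learns a small additive correction.
        return h + self.up(torch.relu(self.down(h)))

class StackedAdapters(nn.Module):
    # Composition: a language adapter followed by a task adapter. Swapping in
    # another language's adapter at inference time, while keeping the task
    # adapter fixed, is one route to zero-shot cross-lingual transfer.
    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.language_adapter = BottleneckAdapter(hidden_dim, bottleneck_dim)
        self.task_adapter = BottleneckAdapter(hidden_dim, bottleneck_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.task_adapter(self.language_adapter(h))

def freeze_non_adapter_params(model: nn.Module) -> None:
    # Train only the adapters: every pre-trained parameter stays frozen.
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name

h = torch.randn(2, 16, 768)                       # (batch, seq_len, hidden)
block = StackedAdapters(hidden_dim=768, bottleneck_dim=48)
print(block(h).shape)                             # torch.Size([2, 16, 768])

Each adapter adds only about 2 x hidden_dim x bottleneck_dim parameters, so fine-tuning touches a small fraction of the model's weights; this is what makes the approach parameter-efficient and the trained modules easy to store, share, and compose.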
Syllabus
Modular and Composable Transfer Learning
Taught by
USC Information Sciences Institute
Related Courses
Generative AI Engineering and Fine-Tuning Transformers
IBM via Coursera
Lessons From Fine-Tuning Llama-2
Anyscale via YouTube
The Next Million AI Apps - Developing Custom Models for Specialized Tasks
MLOps.community via YouTube
LLM Fine-Tuning - Explained
CodeEmporium via YouTube
Fine-tuning Large Models on Local Hardware Using PEFT and Quantization
EuroPython Conference via YouTube