When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications - Lecture 1
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore a cutting-edge conference talk on the intersection of Mixture of Experts (MOE) and Large Language Models (LLMs) for multi-task medical applications. Delve into parameter-efficient fine-tuning techniques presented by authors Qidong Liu, Xian Wu, Xiangyu Zhao, Yuanshao Zhu, Derong Xu, Feng Tian, and Yefeng Zheng. Gain insights into how these advanced AI methodologies are being applied to improve efficiency and performance in various medical tasks. Learn about the potential impact of this research on the future of healthcare technology and AI-assisted medical decision-making.
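The talk's core idea, routing a frozen LLM through a mixture of lightweight experts so one model can be fine-tuned cheaply for several medical tasks, can be pictured with a small sketch. The PyTorch module below is a hypothetical illustration, not the authors' implementation: the class name, task-conditioned gating scheme, and hyperparameters (number of experts, LoRA rank, task count) are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class MoELoRALinear(nn.Module):
    """A frozen linear layer augmented with a mixture of LoRA-style low-rank
    experts, gated by a learned per-task embedding (illustrative sketch only)."""

    def __init__(self, in_features, out_features, num_experts=4, rank=8, num_tasks=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        # One low-rank A/B pair per expert; only these adapters are trained
        self.lora_A = nn.Parameter(torch.randn(num_experts, in_features, rank) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, rank, out_features))
        # Router: each task id maps to softmax weights over the experts
        self.task_gate = nn.Embedding(num_tasks, num_experts)

    def forward(self, x, task_id):
        # x: (batch, seq, in_features); task_id: (batch,) long tensor
        gate = torch.softmax(self.task_gate(task_id), dim=-1)            # (batch, experts)
        # Per-expert low-rank update, then mix the experts with the task gate
        delta = torch.einsum("bsi,eir,ero->bseo", x, self.lora_A, self.lora_B)
        delta = torch.einsum("bseo,be->bso", delta, gate)
        return self.base(x) + delta

# Example: two sequences from different medical tasks share one frozen layer
layer = MoELoRALinear(768, 768, num_experts=4, rank=8, num_tasks=5)
out = layer(torch.randn(2, 16, 768), torch.tensor([0, 3]))
```

The design point this sketch captures is that only the expert adapters and the router are trainable, so multi-task specialization costs a small fraction of the parameters of full fine-tuning.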
Syllabus
SIGIR 2024 T1.2 [fp] When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications
Taught by
Association for Computing Machinery (ACM)
Related Courses
Structuring Machine Learning Projects - DeepLearning.AI via Coursera
Structuring Machine Learning Projects (Russian-language version) - DeepLearning.AI via Coursera
Structuring Machine Learning Projects (Korean-language version) - DeepLearning.AI via Coursera
Stanford CS330: Deep Multi-Task and Meta Learning - Stanford University via YouTube
Stanford Seminar - The Next Generation of Robot Learning - Stanford University via YouTube