When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications - Lecture 1

Offered By: Association for Computing Machinery (ACM) via YouTube

Tags

Parameter-Efficient Fine-Tuning Courses
Machine Learning Courses
Multi-Task Learning Courses
Mixture-of-Experts Courses

Course Description

Overview

Explore a conference talk on the intersection of Mixture of Experts (MOE) and Large Language Models (LLMs) for multi-task medical applications. Delve into the parameter-efficient fine-tuning techniques presented by the authors, Qidong Liu, Xian Wu, Xiangyu Zhao, Yuanshao Zhu, Derong Xu, Feng Tian, and Yefeng Zheng. Gain insight into how these methods improve efficiency and performance across a range of medical tasks, and learn about the potential impact of this research on healthcare technology and AI-assisted medical decision-making.
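For orientation, the sketch below shows one way a mixture of low-rank (LoRA) experts with a simple input-conditioned router could be attached to a frozen linear layer, which is the general flavor of combining MoE with parameter-efficient fine-tuning. It is an illustrative toy example under assumed design choices (the MoELoRALinear module, expert count, rank, and soft router are all hypothetical), not the method presented in the talk.

```python
# Illustrative sketch only: a routed mixture of LoRA experts on a frozen layer.
# Hypothetical module and hyperparameters -- NOT the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRALinear(nn.Module):
    """Frozen base linear layer plus a softly routed mixture of low-rank experts."""
    def __init__(self, in_features, out_features, num_experts=4, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # base weights stay frozen (PEFT)
        self.base.bias.requires_grad_(False)
        self.scaling = alpha / rank
        # One low-rank A/B pair per expert
        self.lora_A = nn.Parameter(torch.randn(num_experts, in_features, rank) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, rank, out_features))
        # Simple router that weights experts per input (could also be task-conditioned)
        self.router = nn.Linear(in_features, num_experts)

    def forward(self, x):
        # x: (batch, in_features)
        gate = F.softmax(self.router(x), dim=-1)                       # (batch, num_experts)
        # Per-expert low-rank updates: (batch, num_experts, out_features)
        delta = torch.einsum("bi,eir,ero->beo", x, self.lora_A, self.lora_B)
        mixed = torch.einsum("be,beo->bo", gate, delta)                # weighted combination
        return self.base(x) + self.scaling * mixed

# Usage: only the router and LoRA parameters are trainable.
layer = MoELoRALinear(64, 64)
out = layer(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```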

Syllabus

SIGIR 2024 T1.2 [fp] When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications


Taught by

Association for Computing Machinery (ACM)

Related Courses

Generative AI Engineering and Fine-Tuning Transformers
IBM via Coursera
Lessons From Fine-Tuning Llama-2
Anyscale via YouTube
The Next Million AI Apps - Developing Custom Models for Specialized Tasks
MLOps.community via YouTube
LLM Fine-Tuning - Explained
CodeEmporium via YouTube
Fine-tuning Large Models on Local Hardware Using PEFT and Quantization
EuroPython Conference via YouTube