Do We Still Need Clinical Large Language Models?
Offered By: Toronto Machine Learning Series (TMLS) via YouTube
Course Description
Overview
Explore the utility of specialized clinical language models in this 27-minute conference talk from the Toronto Machine Learning Series. Delve into an extensive empirical analysis comparing 12 language models of varying sizes on three clinical tasks. Discover how smaller, domain-specific models can outperform larger general-purpose language models in highly specialized, safety-critical medical contexts. Learn about the benefits of pretraining on clinical tokens for building more parameter-efficient models. Gain insights from Alistair Johnson, a Scientist at SickKids, on the advantages of focused clinical models over general-purpose large language models in healthcare applications.
Syllabus
Do We Still Need Clinical Large Language Models?
Taught by
Toronto Machine Learning Series (TMLS)
Related Courses
CMU Advanced NLP: How to Use Pre-Trained Models (Graham Neubig via YouTube)
Stanford Seminar 2022 - Transformer Circuits, Induction Heads, In-Context Learning (Stanford University via YouTube)
Pretraining Task Diversity and the Emergence of Non-Bayesian In-Context Learning for Regression (Simons Institute via YouTube)
In-Context Learning: A Case Study of Simple Function Classes (Simons Institute via YouTube)
AI Mastery: Ultimate Crash Course in Prompt Engineering for Large Language Models (Data Science Dojo via YouTube)