Composable Interventions for Language Models
Offered By: USC Information Sciences Institute via YouTube
Course Description
Overview
Explore a comprehensive lecture on composable interventions for language models, presented by Arinbjörn Kolbeinsson at the USC Information Sciences Institute. Delve into test-time interventions that improve factual accuracy, mitigate harmful outputs, and increase model efficiency without costly retraining. Discover a new framework for studying the effects of applying multiple interventions to the same language model, featuring new metrics and a unified codebase. Examine extensive experiments composing popular methods from three intervention categories: Knowledge Editing, Model Compression, and Machine Unlearning. Uncover meaningful interactions between interventions, including how compression affects editing and unlearning, why the order of application matters, and why general-purpose metrics are inadequate for assessing composability. Gain insights into clear gaps in composability and the need for new multi-objective interventions. Access the public codebase to explore the concepts further. Learn from Arinbjörn Kolbeinsson's expertise in responsible and accurate models for health and biomedicine, as well as his background in machine learning and biostatistics.
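The overview notes that the order in which interventions are applied matters. As a purely illustrative aid (not the lecture's actual codebase, methods, or metrics), the toy Python sketch below shows how composing a "compression" step and a "knowledge edit" step in different orders can change whether the edit survives; every function, parameter, and score here is hypothetical.

```python
# Toy sketch of composing interventions in different orders.
# All functions, values, and the "edit success" metric are hypothetical
# stand-ins, not the lecture's framework or real intervention methods.
from typing import Callable, Dict, List

Model = Dict[str, float]                 # toy stand-in for model state
Intervention = Callable[[Model], Model]  # an intervention maps model -> model

EDIT_TARGET = 0.87  # hypothetical value a "knowledge edit" writes into the model


def quantize(model: Model) -> Model:
    """Toy compression: round every parameter to one decimal place."""
    return {name: round(value, 1) for name, value in model.items()}


def edit_fact(model: Model) -> Model:
    """Toy knowledge edit: overwrite one parameter with the edit target."""
    edited = dict(model)
    edited["fact_weight"] = EDIT_TARGET
    return edited


def compose(model: Model, interventions: List[Intervention]) -> Model:
    """Apply interventions left to right; varying this order is the point."""
    for intervention in interventions:
        model = intervention(model)
    return model


def edit_success(model: Model) -> float:
    """Toy metric: 1.0 means the edit is fully retained."""
    return 1.0 - abs(model.get("fact_weight", 0.0) - EDIT_TARGET)


if __name__ == "__main__":
    base = {"fact_weight": 0.37, "other_weight": 0.52}

    edit_then_compress = compose(dict(base), [edit_fact, quantize])
    compress_then_edit = compose(dict(base), [quantize, edit_fact])

    # In this toy setup, compressing after editing perturbs the edited value,
    # while editing after compressing leaves it intact, so order changes the score.
    print("edit -> compress:", edit_success(edit_then_compress))
    print("compress -> edit:", edit_success(compress_then_edit))
```

Running the sketch prints a lower score for the edit-then-compress order than for compress-then-edit, mirroring (in a cartoon way) the kind of order-dependent interactions the lecture measures with proper metrics.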
Syllabus
Composable Interventions for Language Models
Taught by
USC Information Sciences Institute
Related Courses
TensorFlow Lite for Edge Devices - Tutorial (freeCodeCamp)
Few-Shot Learning in Production (HuggingFace via YouTube)
TinyML Talks Germany - Neural Network Framework Using Emerging Technologies for Screening Diabetic (tinyML via YouTube)
TinyML for All: Full-stack Optimization for Diverse Edge AI Platforms (tinyML via YouTube)
TinyML Talks - Software-Hardware Co-design for Tiny AI Systems (tinyML via YouTube)