SleepFM - Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals
Offered By: Stanford University via YouTube
Course Description
Overview
Explore a comprehensive lecture on SleepFM, a multi-modal foundation model for sleep analysis. Delve into the development of this approach, which leverages a large polysomnography dataset from over 14,000 participants comprising more than 100,000 hours of multi-modal sleep recordings. Learn about the novel leave-one-out approach to contrastive learning and its significant improvements in downstream task performance over standard pairwise contrastive learning. Discover how SleepFM's learned embeddings outperform end-to-end trained convolutional neural networks on sleep stage classification and sleep-disordered breathing detection. Gain insight into the model's ability to retrieve the corresponding recording clips of other modalities with high accuracy. Understand the value of holistic multi-modal sleep modeling in capturing the full complexity of sleep recordings, and its potential implications for sleep research and clinical applications.
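The contrast between the two training objectives mentioned above can be illustrated with a minimal sketch. In standard pairwise contrastive learning, an InfoNCE-style loss is computed between every pair of modality embeddings; in a leave-one-out scheme, each modality is instead contrasted against a combination of the remaining modalities (here, their mean). All function names, the use of NumPy, the temperature value, and the mean-of-others aggregation are illustrative assumptions, not the lecture's exact implementation.

```python
import numpy as np

def info_nce(a, b, temperature=0.1):
    """InfoNCE-style loss between two batches of embeddings.
    Matching rows (same clip, different modality) are positives;
    the other rows in the batch serve as negatives."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                  # (batch, batch) similarities
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))           # -log p(correct match)

def pairwise_loss(modalities):
    """Standard pairwise contrastive learning: sum the loss over all modality pairs."""
    total = 0.0
    for i in range(len(modalities)):
        for j in range(i + 1, len(modalities)):
            total += info_nce(modalities[i], modalities[j])
    return total

def leave_one_out_loss(modalities):
    """Leave-one-out scheme (illustrative): contrast each modality
    against an aggregate (here, the mean) of the other modalities."""
    total = 0.0
    for i, m in enumerate(modalities):
        others = [x for j, x in enumerate(modalities) if j != i]
        total += info_nce(m, np.mean(others, axis=0))
    return total

# Toy embeddings for three modalities (brain activity, ECG, respiratory):
# batch of 8 clips, 16-dimensional embeddings.
rng = np.random.default_rng(0)
bas, ecg, resp = (rng.standard_normal((8, 16)) for _ in range(3))
print(pairwise_loss([bas, ecg, resp]))
print(leave_one_out_loss([bas, ecg, resp]))
```

With three modalities the pairwise objective sums three pair losses, while leave-one-out computes one loss per modality against the others' aggregate; the latter is what the lecture reports as improving downstream performance.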
Syllabus
MedAI #124: SleepFM: Multi-modal Representation Learning for Sleep | Rahul Thapa
Taught by
Stanford MedAI
Related Courses
Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music (Stanford University via YouTube)
How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained (Yannic Kilcher via YouTube)
OpenAI CLIP - Connecting Text and Images - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Learning Compact Representation with Less Labeled Data from Sensors (tinyML via YouTube)
Human Activity Recognition - Learning with Less Labels and Privacy Preservation (University of Central Florida via YouTube)