Multiview and Self-Supervised Representation Learning - Nonlinear Mixture Identification
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore a 48-minute lecture on multiview and self-supervised representation learning from a nonlinear mixture identification perspective. Delve into the insights presented by Xiao Fu of Oregon State University at IPAM's Explainable AI for the Sciences workshop. Examine the central concept of representation learning and its importance in preventing overfitting and enhancing domain adaptation and transfer learning. Investigate two representation learning paradigms that use multiple views of data, covering both naturally acquired and artificially produced multiview data. Analyze the effectiveness of multiview analysis tools such as deep canonical correlation analysis and of self-supervised learning paradigms such as BYOL and Barlow Twins. Discover an intuitive generative model of multiview data and learn how latent correlation maximization guarantees the extraction of shared components across views. Study methods for disentangling private information from shared components and understand the implications for cross-view translation and data generation. Gain insights from a finite-sample analysis of nonlinear mixture identifiability, and examine the practical use of the theoretical results and the newly designed regularization techniques.
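To make the generative-model viewpoint above concrete, here is a minimal, illustrative PyTorch sketch (not code from the lecture): each view is generated as a nonlinear mixture of a shared component and a view-specific private component, and two encoders are trained by maximizing the correlation of their latent outputs across views. The whitening constraints and the regularization discussed in the talk are omitted for brevity, and all names and dimensions (make_view, neg_latent_correlation, SHARED_DIM, and so on) are assumptions made purely for illustration.

```python
# Minimal sketch of multiview data as a nonlinear mixture of shared + private
# components, with cross-view latent correlation maximization (assumed setup).
import torch
import torch.nn as nn

SHARED_DIM, PRIVATE_DIM, OBS_DIM, N = 4, 2, 16, 2048

def make_view(c, seed):
    """Nonlinearly mix the shared component c with a fresh private component."""
    g = torch.Generator().manual_seed(seed)
    p = torch.randn(c.shape[0], PRIVATE_DIM, generator=g)       # private part of this view
    mix = torch.randn(SHARED_DIM + PRIVATE_DIM, OBS_DIM, generator=g)
    return torch.tanh(torch.cat([c, p], dim=1) @ mix)            # nonlinear mixture

c = torch.randn(N, SHARED_DIM)                                   # shared component across views
x1, x2 = make_view(c, seed=0), make_view(c, seed=1)              # two views of the same samples

def encoder():
    return nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, SHARED_DIM))

f1, f2 = encoder(), encoder()
opt = torch.optim.Adam(list(f1.parameters()) + list(f2.parameters()), lr=1e-3)

def neg_latent_correlation(z1, z2):
    """Negative sum of per-dimension Pearson correlations between the two latent codes."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    return -(z1 * z2).mean(0).sum()

for step in range(500):
    opt.zero_grad()
    loss = neg_latent_correlation(f1(x1), f2(x2))                # maximize cross-view correlation
    loss.backward()
    opt.step()
```

Maximizing this cross-view latent correlation is the mechanism the talk argues can provably extract the shared components; the view-specific private parts are then disentangled with the additional regularization techniques discussed in the lecture.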
Syllabus
Xiao Fu - Multiview and Self-Supervised Representation Learning: Nonlinear Mixture Identification
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
Visual Recognition & Understanding (University at Buffalo via Coursera)
Deep Learning for Computer Vision (IIT Hyderabad via Swayam)
Deep Learning in Life Sciences - Spring 2021 (Massachusetts Institute of Technology via YouTube)
Advanced Deep Learning Methods for Healthcare (University of Illinois at Urbana-Champaign via Coursera)
Generative Models (Serrano.Academy via YouTube)