Continuous Kendall Shape Variational Autoencoders
Offered By: Conference GSI via YouTube
Course Description
Overview
Explore an approach to unsupervised learning of geometrically meaningful representations using equivariant variational autoencoders (VAEs) with hyperspherical latents in this 20-minute conference talk. Discover how the equivariant encoder/decoder ensures geometrically meaningful latents grounded in the input space, and learn how these latents are mapped to hyperspheres so they can be interpreted as points in a Kendall shape space. Examine the extension of the Kendall-shape VAE paradigm, which provides a general definition of Kendall shapes in terms of group representations and so allows more flexible KS-VAE modeling. Gain insight into how learning with generalized Kendall shapes, as opposed to landmark-based shapes, enhances representation capacity in this presentation from the Conference GSI.
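The interpretation of hyperspherical latents as Kendall shapes can be made concrete with the classical landmark construction: a configuration of landmarks is centered to remove translation and normalized to unit norm to remove scale, which places it on a hypersphere (the pre-shape sphere). The sketch below is purely illustrative and assumes landmark-based shapes of k points in m dimensions; the function name and the NumPy implementation are not taken from the talk.

import numpy as np

def to_kendall_preshape(landmarks: np.ndarray) -> np.ndarray:
    """Map a k x m landmark configuration to Kendall pre-shape space.

    Translation is removed by centering, and scale by normalizing to unit
    Frobenius norm, so the result lies on a hypersphere. Quotienting out
    rotations (e.g., via Procrustes alignment) would then yield the Kendall
    shape; that step is omitted in this sketch.
    """
    centered = landmarks - landmarks.mean(axis=0, keepdims=True)  # remove translation
    norm = np.linalg.norm(centered)                               # overall scale
    if norm == 0:
        raise ValueError("Degenerate configuration: all landmarks coincide.")
    return centered / norm                                        # unit-norm point on the pre-shape sphere

# Example: project a triangle's three 2-D landmarks onto the pre-shape sphere.
triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
z = to_kendall_preshape(triangle)
print(np.linalg.norm(z))  # prints 1.0, confirming the latent sits on a hypersphere

The talk's generalized Kendall shapes replace this landmark-specific construction with a definition in terms of group representations, which is what allows the more flexible KS-VAE models described above.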
Syllabus
Continuous Kendall Shape Variational Autoencoders
Taught by
Conference GSI
Related Courses
Graph Attention Networks - GNN Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Geometric Deep Learning for Drug Discovery (IEEE Signal Processing Society via YouTube)
Detection of Objects in Cryo-Electron Micrographs Using Geometric Deep Learning (Institute for Pure & Applied Mathematics (IPAM) via YouTube)
Physics-Inspired Learning on Graph - Michael Bronstein, PhD (Open Data Science via YouTube)
Inverse Problems on Graphs with Geometric Deep Learning (APS Physics via YouTube)