PCA, AE, K-Means, Gaussian Mixture Model, Sparse Coding, and Intuitive VAE
Offered By: Alfredo Canziani via YouTube
Course Description
Overview
Explore advanced machine learning techniques in this comprehensive lecture by Yann LeCun. Dive into Principal Component Analysis (PCA), autoencoders, K-means clustering, Gaussian mixture models, sparse coding, and variational autoencoders (VAEs). Learn about training methods, architectural approaches, and regularized energy-based models (EBMs). Gain insights into unconditional regularized latent-variable EBMs, amortized inference, convolutional sparse coding, and video prediction. Benefit from in-depth Q&A sessions on labels, supervised learning, norms, and posterior distributions. Enhance your understanding with practical examples on MNIST and natural image patches, and explore an intuitive interpretation of VAEs.
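To make the first architectural method concrete, here is a minimal NumPy sketch (illustrative only, not code from the lecture; the toy data and the latent dimension d are assumptions) of PCA as a linear autoencoder whose reconstruction error plays the role of the energy in the EBM framing:

```python
import numpy as np

# Minimal PCA sketch (illustrative, not from the lecture):
# encode by projecting onto the top-d principal directions,
# decode by projecting back, and use the reconstruction error
# E(y) = ||y - W W^T y||^2 as a per-sample energy.

rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 10))   # toy data: 500 samples, 10 dimensions
Y = Y - Y.mean(axis=0)           # center the data

d = 3                            # assumed latent (bottleneck) dimension
# The principal directions are the top-d right singular vectors
# of the centered data matrix.
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
W = Vt[:d].T                     # (10, d), orthonormal columns

Z = Y @ W                        # encode: latent codes
Y_hat = Z @ W.T                  # decode: reconstructions
energy = np.sum((Y - Y_hat) ** 2, axis=1)  # per-sample reconstruction energy
print(f"mean reconstruction energy: {energy.mean():.3f}")
```

The same encode/decode structure with learned, nonlinear maps gives the bottleneck autoencoder covered next in the lecture.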
Syllabus
– Welcome to class
– Training methods revisited
– Architectural methods
– 1. PCA
– Q&A on Definitions: Labels, unconditional, and un-/self-supervised learning
– 2. Auto-encoder with Bottleneck
– 3. K-Means
– 4. Gaussian mixture model
– Regularized EBM
– Yann out of context
– Q&A on Norms and Posterior: when the student is thinking too far ahead
– 1. Unconditional regularized latent variable EBM: Sparse coding
– Sparse modeling on MNIST & natural patches
– 2. Amortized inference
– ISTA algorithm & RNN Encoder (a minimal ISTA sketch follows the syllabus below)
– 3. Convolutional sparse coding
– 4. Video prediction: very briefly
– 5. VAE: an intuitive interpretation
– Helpful whiteboard stuff
– Another interpretation
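The sparse coding and amortized inference items above center on the ISTA algorithm. As a rough sketch of the idea (assumptions: a fixed random dictionary D, made-up sizes, and a hand-picked sparsity weight lam; this is not the lecture's code), ISTA alternates a gradient step on the reconstruction term with a soft-thresholding (shrinkage) step that enforces the L1 sparsity penalty:

```python
import numpy as np

def ista(y, D, lam=0.1, n_iters=100):
    """Sketch of ISTA: find a sparse z minimizing
    0.5 * ||y - D z||^2 + lam * ||z||_1
    for a signal y (n,) and a fixed dictionary D (n, m)."""
    # Step size from the Lipschitz constant of the gradient: L = ||D||_2^2.
    eta = 1.0 / np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ z - y)   # gradient of the quadratic data term
        u = z - eta * grad         # gradient step
        # Soft-threshold: the proximal operator of the L1 penalty.
        z = np.sign(u) * np.maximum(np.abs(u) - eta * lam, 0.0)
    return z

# Toy usage: recover a 3-sparse code from a random dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
z_true = np.zeros(128)
z_true[[3, 40, 90]] = [1.0, -2.0, 0.5]
y = D @ z_true
z_hat = ista(y, D, lam=0.05, n_iters=500)
print(np.flatnonzero(np.abs(z_hat) > 0.1))  # expect indices near [3, 40, 90]
```

Amortized inference, as the syllabus hints with "RNN Encoder", replaces this iterative loop with a trained encoder (LISTA-style) that predicts the sparse code in a fixed number of steps.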
Taught by
Alfredo Canziani
Related Courses
– Predictive Analytics: Gaining Insights from Big Data (Queensland University of Technology via FutureLearn)
– Cluster Analysis (University of Texas Arlington via edX)
– Aprendizaje de máquinas [Machine Learning] (Universidad Nacional Autónoma de México via Coursera)
– Foundations of Data Science: K-Means Clustering in Python (University of London International Programmes via Coursera)
– Image Compression with K-Means Clustering (Coursera Project Network via Coursera)