Unsupervised Learning with Autoencoders
Offered By: MLCon | Machine Learning Conference via YouTube
Course Description
Overview
Explore the world of unsupervised learning through autoencoders in this 49-minute conference talk by Christoph Henkelmann at MLCon. Dive into the basic concept of autoencoders, various architectural variants, and their diverse applications. Learn about CNN autoencoders, the importance of the bottleneck layer, and practical use cases including anomaly detection, denoising, and similarity detection. Discover the power of generative autoencoders, intrinsic space exploration, and the advantages of variational autoencoders (VAEs). Gain insights into real-world applications of unsupervised learning techniques in industry, and leave with a comprehensive understanding of this powerful machine learning approach.
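The talk itself is slide-based and contains no code, but the core idea it describes (an encoder squeezing the input through a narrow bottleneck, a decoder reconstructing it, and anomaly detection via reconstruction error) can be illustrated with a short sketch. The PyTorch code below is not from the talk: the layer sizes, the 28x28 grayscale input shape, and the anomaly threshold are arbitrary assumptions chosen only for illustration. A companion sketch of the variational autoencoder follows the syllabus further down.

```python
# Minimal sketch (not from the talk): a small convolutional autoencoder for
# 28x28 grayscale images with a low-dimensional bottleneck, plus anomaly
# scoring by reconstruction error. All sizes and the threshold are assumptions.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, bottleneck_dim=16):
        super().__init__()
        # Encoder: compress the image down to a small code vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, bottleneck_dim),      # the bottleneck
        )
        # Decoder: reconstruct the image from the bottleneck code.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 14x14 -> 28x28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    """Per-sample mean squared error; high values flag likely anomalies."""
    with torch.no_grad():
        recon = model(x)
    return ((x - recon) ** 2).flatten(1).mean(dim=1)

# Unsupervised training loop: the input is its own target, no labels needed.
model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(10):                      # epochs; real data loader omitted
    batch = torch.rand(64, 1, 28, 28)    # placeholder for real image batches
    optimizer.zero_grad()
    loss = loss_fn(model(batch), batch)
    loss.backward()
    optimizer.step()

# Flag samples whose reconstruction error exceeds an (assumed) threshold.
scores = reconstruction_error(model, torch.rand(8, 1, 28, 28))
anomalies = scores > 0.1
```

The essential point is that training requires no labels: the input serves as its own target, and samples the trained model reconstructs poorly become anomaly candidates.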
Syllabus
Intro
UNSUPERVISED TRAINING
CNN AUTOENCODERS
THE BOTTLENECK
EVEN MORE EXAMPLES
ANOMALY DETECTION
USE CASE 1: ANOMALY EXAMPLES
DENOISING
PRETRAINING
SIMILARITY DETECTION
GENERATIVE AUTOENCODERS
INTRINSIC SPACE & DIMENSION
PAC-MAN'S INTRINSIC SPACE
THE IDEAL PAC-MAN BOTTLENECK
BACK IN REALITY...
THE VARIATIONAL AUTOENCODER
ADVANTAGES OF THE VAE
UNSUPERVISED LEARNING AT DIVISIO
SUMMARY COMING UP IN OUR BLOG
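The variational autoencoder covered near the end of the syllabus above replaces the fixed bottleneck code with a learned distribution, which is what makes the latent space smooth enough for generation. As a hedged illustration (again not from the talk, with all dimensions assumed), a minimal VAE bottleneck with the reparameterization trick and a KL penalty might look like this:

```python
# Minimal sketch (not from the talk): the core of a variational autoencoder.
# The encoder predicts a mean and log-variance per latent dimension, a latent
# code is sampled via the reparameterization trick, and the loss adds a
# KL-divergence term that keeps the latent space smooth. Sizes are assumptions.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)       # mean of q(z|x)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients can flow through the sampling step.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence from the standard normal prior.
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Generative use: decode random latent vectors drawn from the prior.
model = VAE()
with torch.no_grad():
    generated = model.decoder(torch.randn(4, 16))
```

Decoding samples from the standard normal prior, as in the last lines, is one way to realize the generative use of autoencoders mentioned in the course description.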
Taught by
MLCon | Machine Learning Conference
Related Courses
Deep Learning – Part 2 (Indian Institute of Technology Madras via Swayam)
Image Compression and Generation using Variational Autoencoders in Python (Coursera Project Network via Coursera)
Probabilistic Deep Learning with TensorFlow 2 (Imperial College London via Coursera)
Generative Models (Serrano.Academy via YouTube)
NVAE: A Deep Hierarchical Variational Autoencoder (Yannic Kilcher via YouTube)