Fast Neural Kernel Embeddings for General Activations
Offered By: Google TechTalks via YouTube
Course Description
Overview
Explore the intricacies of neural kernel embeddings for general activations in this 34-minute Google TechTalk presented by Insu Han. Delve into infinite-width limits and the connections they establish between neural networks and kernel methods. Discover how to overcome the limitations of kernel methods in large-scale learning settings, namely their quadratic runtime and memory complexities. Learn about methods that handle general activations beyond the commonly analyzed ReLU, including exact dual activation expressions and effective approximation techniques. Examine a fast sketching method for approximating multi-layer Neural Network Gaussian Process (NNGP) kernel and Neural Tangent Kernel (NTK) matrices using a truncated Hermite expansion. Understand the advantages of these methods, which apply to any dataset of points in ℝ^d, without restricting the data points to the unit sphere. Explore a subspace embedding for NNGP and NTK matrices with near input-sparsity runtime and near-optimal target dimension for homogeneous dual activation functions. Gain insights into the empirical results, including a 106× speedup for approximate Convolutional Neural Tangent Kernel (CNTK) computation of a 5-layer Myrtle network on the CIFAR-10 dataset.
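For intuition about the truncated Hermite expansion mentioned above, here is a minimal NumPy sketch (illustrative only; it is not the speaker's implementation, and all function names are made up for this example). If an activation σ expands in the normalized probabilists' Hermite basis as σ(z) = Σ_n a_n He_n(z)/√(n!), then its dual activation, the entrywise map that the NNGP/NTK layer recursion applies to correlations, is k(ρ) = Σ_n a_n² ρ^n, so truncating the series gives a low-degree polynomial approximation. The sketch estimates the coefficients of ReLU by Gauss-Hermite quadrature and checks the truncated series against ReLU's known closed-form dual (the degree-1 arc-cosine kernel):

import math
import numpy as np

def hermite_coeffs(activation, degree, n_quad=200):
    # a_n = E[activation(Z) * He_n(Z)] / sqrt(n!) for Z ~ N(0,1),
    # estimated with Gauss-Hermite quadrature.
    x, w = np.polynomial.hermite.hermgauss(n_quad)  # physicists' nodes/weights
    z = np.sqrt(2.0) * x                            # change of variables to N(0,1)
    w = w / np.sqrt(np.pi)                          # so E[f(Z)] ~= sum_i w_i f(z_i)
    # Probabilists' Hermite polynomials: He_{n+1}(z) = z*He_n(z) - n*He_{n-1}(z).
    He = [np.ones_like(z), z]
    for n in range(1, degree):
        He.append(z * He[n] - n * He[n - 1])
    sig = activation(z)
    return np.array([np.sum(w * sig * He[n]) / math.sqrt(math.factorial(n))
                     for n in range(degree + 1)])

def truncated_dual(a, rho):
    # Degree-T truncated dual activation: k(rho) ~= sum_{n<=T} a_n^2 * rho^n.
    return (np.asarray(rho)[..., None] ** np.arange(len(a))) @ (a ** 2)

relu = lambda z: np.maximum(z, 0.0)
a = hermite_coeffs(relu, degree=16)

# Closed-form ReLU dual (degree-1 arc-cosine kernel) for comparison.
rho = np.linspace(-0.99, 0.99, 9)
exact = (np.sqrt(1 - rho**2) + rho * (np.pi - np.arccos(rho))) / (2 * np.pi)
print(np.max(np.abs(truncated_dual(a, rho) - exact)))  # truncation + quadrature error

Because the dual activation is applied entrywise in the layer-to-layer NNGP/NTK recursion, a truncated expansion like this reduces each layer's kernel to a short sum of polynomial kernels, which is what makes fast sketching of the resulting matrices possible; the talk's subspace-embedding and speedup results build on that reduction.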
Syllabus
Fast Neural Kernel Embeddings for General Activations
Taught by
Google TechTalks
Related Courses
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Utilisez des modèles supervisés non linéaires (Use Non-Linear Supervised Models) - CentraleSupélec via OpenClassrooms
Statistical Machine Learning - Eberhard Karls University of Tübingen via YouTube
Interplay of Linear Algebra, Machine Learning, and HPC - JuliaCon 2021 Keynote - The Julia Programming Language via YouTube
Interpolation and Learning With Scale Dependent Kernels - MITCBMM via YouTube