Sparsifying Generalized Linear Models

Offered By: Simons Institute via YouTube

Tags

Generalized Linear Models Courses Algorithms Courses

Course Description

Overview

Explore sparsification techniques for generalized linear models in this 56-minute lecture by Yang Liu of Stanford University. Delve into the mathematical foundations of sparsifying sums F: ℝ^n → ℝ_+ of the form F(x) = f_1(⟨a_1, x⟩) + … + f_m(⟨a_m, x⟩). Learn about the existence of (1+ε)-approximate sparsifiers with support size n/ε² · (log(n/ε))^{O(1)} for symmetric, monotone functions satisfying natural growth bounds. Discover efficient algorithms for computing such sparsifiers and their applications to optimizing various generalized linear models, including ℓ_p regression. Gain insight into near-optimal reductions for high-accuracy optimization via solving sparse regression instances. Understand the implications of this work, which generalizes classic ℓ_p sparsification and provides the first near-linear-size sparsifiers for the Huber loss function and its generalizations.
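To make the objective concrete, the sketch below evaluates F(x) = Σ_i f_i(⟨a_i, x⟩) with the Huber loss and builds a toy row-sampling sparsifier that keeps a small weighted subset of rows. The sampling probabilities here (proportional to squared row norms) are a simplification for illustration only; the lecture's sparsifiers rely on more refined sampling probabilities and achieve the (1+ε) guarantees described above.

```python
import numpy as np

def huber(t, delta=1.0):
    # Huber loss: quadratic near zero, linear in the tails.
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def glm_objective(A, x, weights=None):
    # F(x) = sum_i w_i * f(<a_i, x>), here with f = Huber loss.
    w = np.ones(A.shape[0]) if weights is None else weights
    return float(np.sum(w * huber(A @ x)))

def sample_sparsifier(A, k, seed=None):
    # Toy importance sampler: keep k rows drawn with probability
    # proportional to squared row norms, reweighted so that the
    # sampled objective is an unbiased estimate of the full one.
    # (Not the lecture's algorithm, which uses sharper probabilities.)
    rng = np.random.default_rng(seed)
    p = np.linalg.norm(A, axis=1) ** 2
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=k, replace=True, p=p)
    weights = 1.0 / (k * p[idx])
    return A[idx], weights

# Usage: compare the full objective with its sparsified estimate.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 5))
x = rng.standard_normal(5)
B, w = sample_sparsifier(A, k=100, seed=1)
full, approx = glm_objective(A, x), glm_objective(B, x, w)
```

Because F is a sum of per-row terms, reweighting each sampled row by the inverse of (k times its sampling probability) makes the sparsified sum an unbiased estimator of F(x) for any per-row loss; the hard part, which the lecture addresses, is choosing probabilities so the estimate concentrates uniformly over all x with only n/ε² · polylog-many rows.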

Syllabus

Sparsifying Generalized Linear Models


Taught by

Simons Institute

Related Courses

Principles of fMRI 1
Johns Hopkins University via Coursera
FA19: Statistical Modeling and Regression Analysis
Georgia Institute of Technology via edX
Generalized Linear Models (Обобщенные линейные модели)
Saint Petersburg State University via Coursera
Regression Analysis
Indian Institute of Science Education and Research, Pune via Swayam
Multiple and Logistic Regression in R
DataCamp