Exploiting Sparsity and Structure in Parametric and Nonparametric Estimation - 2007
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore new findings on sparse estimation in parametric graphical models and nonparametric regression in high dimensions in this 1-hour 5-minute lecture by John Lafferty from Carnegie Mellon University. Delve into l1 regularization techniques for estimating graph structures in high-dimensional settings and discover a novel nonparametric lasso method that regularizes estimator derivatives. Examine the challenges of semi-supervised learning and how unlabeled data can potentially enhance estimation. Analyze current regularization methods through the lens of minimax theory and learn about new approaches that yield improved convergence rates. Gain insights from Lafferty's extensive background in machine learning, statistical learning theory, computational statistics, and natural language processing as he presents joint work with collaborators in this Center for Language & Speech Processing talk at Johns Hopkins University.
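The l1 regularization the overview mentions is the core of the lasso: penalizing the absolute values of the coefficients drives many of them exactly to zero, which is what makes high-dimensional sparse estimation tractable. As a minimal sketch (the data, the coordinate-descent solver, and all parameter values here are illustrative assumptions, not material from the talk):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xb||^2 + lam * ||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Hypothetical sparse ground truth: only 2 of 10 coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta = np.zeros(10)
beta[0], beta[3] = 3.0, -2.0
y = X @ beta + 0.1 * rng.standard_normal(200)
b_hat = lasso_cd(X, y, lam=0.1)
```

With a well-chosen penalty level, the estimate recovers the true support while zeroing out the irrelevant coordinates, at the cost of a small shrinkage bias on the nonzero entries.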
Syllabus
Exploiting Sparsity and Structure in Parametric and Nonparametric Estimation – John Lafferty - 2007
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Introduction to Bayesian Statistics (Databricks via Coursera)
Probability via Computation - Week 4 (The Julia Programming Language via YouTube)
On the Unreasonable Effectiveness of Compressive Imaging - Ben Adcock, Simon Fraser University (Alan Turing Institute via YouTube)
The Mother of All Representer Theorems for Inverse Problems and Machine Learning - Michael Unser (Alan Turing Institute via YouTube)
Affine Spline Insights into Deep Learning - Richard Baraniuk, Rice University (Alan Turing Institute via YouTube)