Automated Scalable Bayesian Inference via Data Summarization - 2018
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore advanced techniques for scalable Bayesian inference on large datasets in this 59-minute lecture by MIT's Tamara Broderick. Delve into data summarization via coresets, small weighted subsets of the data, as a way to overcome the computational cost of Bayesian methods. Learn about theoretical guarantees on coreset size and approximation quality, and discover how this approach achieves geometric decay in posterior approximation error. Examine applications to both synthetic and real datasets that demonstrate significant improvements over uniform random subsampling. Gain insight into Broderick's research on developing and analyzing models for scalable Bayesian machine learning, and understand how these methods handle large datasets efficiently while retaining the benefits of Bayesian inference.
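To make the coreset idea concrete, the sketch below (an illustrative assumption, not Broderick's algorithm from the lecture) builds a small weighted subset of a toy logistic-regression dataset by importance sampling, using the data norm as a stand-in sensitivity score, and compares the weighted log-likelihood it yields against plain uniform subsampling and the full-data value at a fixed parameter.

```python
import numpy as np

# Minimal sketch: approximate a full-data log-likelihood with a small weighted
# subset ("coreset"), built by importance sampling, versus uniform subsampling.

rng = np.random.default_rng(0)

# Toy data: N points, d features, labels in {-1, +1}.
N, d = 10_000, 5
X = rng.normal(size=(N, d))
theta_true = rng.normal(size=d)
y = np.where(rng.random(N) < 1.0 / (1.0 + np.exp(-X @ theta_true)), 1.0, -1.0)

def log_lik(theta, X, y, w=None):
    """Weighted logistic log-likelihood: sum_n w_n * log sigma(y_n x_n^T theta)."""
    z = y * (X @ theta)
    ll = -np.logaddexp(0.0, -z)          # log sigmoid, numerically stable
    return ll.sum() if w is None else (w * ll).sum()

theta = rng.normal(size=d)               # evaluate approximations at some theta
full = log_lik(theta, X, y)

M = 200                                   # subset size

# Uniform subsampling: pick M points, upweight each by N/M.
idx_u = rng.choice(N, size=M, replace=False)
uniform_est = log_lik(theta, X[idx_u], y[idx_u], w=np.full(M, N / M))

# Importance sampling: sample points proportionally to a cheap sensitivity
# proxy (here simply the data norm, an assumption for illustration), then
# weight each sampled point by 1 / (M * p_n) so the estimate stays unbiased.
sens = np.linalg.norm(X, axis=1) + 1e-12
p = sens / sens.sum()
idx_i = rng.choice(N, size=M, replace=True, p=p)
w_i = 1.0 / (M * p[idx_i])
coreset_est = log_lik(theta, X[idx_i], y[idx_i], w=w_i)

print(f"full log-lik:        {full:,.1f}")
print(f"uniform subsample:   {uniform_est:,.1f}")
print(f"importance coreset:  {coreset_est:,.1f}")
```

The Hilbert-coreset approach covered in the lecture goes further, choosing the weights by an optimization (e.g., Frank-Wolfe) rather than by random sampling; the sketch only illustrates the basic idea of replacing the full likelihood with a weighted subset.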
Syllabus
Core" of the data set • Observe: redundancies can exist even if data isn't "tall
Roadmap
Bayesian coresets
Uniform subsampling revisited
Importance sampling
Hilbert coresets
Frank-Wolfe
Gaussian model (simulated): 1K points; norms and inference in closed form
Logistic regression (simulated)
Real data experiments
Data summarization
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
How to Use Pivot Tables to Analyse Data in Excel - FutureLearn
Complex Retrieval Queries in MySQL Workbench - Coursera Project Network via Coursera
Data Analysis Using Python - University of Pennsylvania via Coursera
Introduction to Data Visualization using Google Data Studio - Coursera Project Network via Coursera
Data Preparation in Alteryx - DataCamp