Parameter Estimation and Interpretability in Bayesian Mixture Models

Offered By: VinAI via YouTube

Tags

Bayesian Statistics, Machine Learning, Signal Processing, Parameter Estimation, Model Interpretability

Course Description

Overview

Explore the intricacies of parameter estimation and interpretability in Bayesian mixture models through this comprehensive seminar series. Delve into the research of Long Nguyen, an associate professor at the University of Michigan, as he examines posterior contraction behaviors for parameters in Bayesian mixture modeling. Investigate two types of prior specification: one with an explicit prior distribution on the number of mixture components, and another placing a nonparametric prior on the space of mixing distributions. Learn how these approaches yield optimal rates of posterior contraction and consistently recover unknown numbers of mixture components. Analyze the impact of model misspecification on posterior contraction rates, with a focus on the crucial role of kernel density function choices. Gain insights into the tradeoffs between model expressiveness and interpretability in mixture modeling, equipping yourself with valuable knowledge for statistical modeling in various applications.
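To make the two prior specifications concrete, the sketch below (an illustration assumed for this listing, not material from the seminar itself) fits the same synthetic data with scikit-learn's BayesianGaussianMixture under two weight priors: a finite mixture with a Dirichlet prior on the component weights, and a truncated Dirichlet process prior on the mixing distribution. In both cases, deliberately over-specifying the number of components lets you observe how posterior weight concentrates on a small number of effective components, paralleling the recovery behavior discussed in the talk.

```python
# Minimal sketch (assumed example, not the speaker's code): contrasting a
# Dirichlet-distribution prior on mixture weights with a (truncated)
# Dirichlet-process prior, using scikit-learn's BayesianGaussianMixture.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic data from a 3-component Gaussian mixture (the "true" model).
X = np.concatenate([
    rng.normal(-4.0, 1.0, size=(300, 1)),
    rng.normal(0.0, 0.5, size=(300, 1)),
    rng.normal(5.0, 1.5, size=(300, 1)),
])

for prior_type in ("dirichlet_distribution", "dirichlet_process"):
    model = BayesianGaussianMixture(
        n_components=10,  # deliberately over-specified upper bound
        weight_concentration_prior_type=prior_type,
        max_iter=500,
        random_state=0,
    ).fit(X)
    # Count components whose posterior weight remains non-negligible;
    # redundant components should be shrunk toward zero weight.
    active = int((model.weights_ > 0.02).sum())
    print(f"{prior_type}: {active} components with weight > 2%")
```

Running the sketch typically reports roughly three active components under either prior, although the exact count depends on the threshold and the variational fit; the seminar's results concern the asymptotic rates at which such posterior concentration occurs and how kernel misspecification can degrade them.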

Syllabus

Seminar Series: Parameter Estimation & Interpretability in Bayesian Mixture Models


Taught by

VinAI

Related Courses

Bayesian Statistics
Duke University via Coursera
Bayesian Methods for Machine Learning
Higher School of Economics via Coursera
Bayesian Optimization with Python
Coursera Project Network via Coursera
Bayesian Statistics: From Concept to Data Analysis
University of California, Santa Cruz via Coursera
Bayesian Statistics
University of California, Santa Cruz via Coursera