YoVDO

Parameter Estimation and Interpretability in Bayesian Mixture Models

Offered By: VinAI via YouTube

Tags

Bayesian Statistics Courses, Machine Learning Courses, Signal Processing Courses, Parameter Estimation Courses, Model Interpretability Courses

Course Description

Overview

Explore the intricacies of parameter estimation and interpretability in Bayesian mixture models through this comprehensive seminar series. Delve into the research of Long Nguyen, an associate professor at the University of Michigan, as he examines posterior contraction behaviors for parameters in Bayesian mixture modeling. Investigate two types of prior specification: one with an explicit prior distribution on the number of mixture components, and another placing a nonparametric prior on the space of mixing distributions. Learn how these approaches yield optimal rates of posterior contraction and consistently recover unknown numbers of mixture components.

Analyze the impact of model misspecification on posterior contraction rates, with a focus on the crucial role of kernel density function choices. Gain insights into the tradeoffs between model expressiveness and interpretability in mixture modeling, equipping yourself with valuable knowledge for statistical modeling in various applications.
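To make the second prior specification concrete, here is a minimal sketch of fitting a (truncated) Dirichlet-process Gaussian mixture with scikit-learn's `BayesianGaussianMixture`; this is an illustrative stand-in for the nonparametric approach discussed in the seminar, not the speaker's own method. The simulated three-component data, the truncation level, and the 1% weight threshold for counting "effective" components are all assumptions for the example.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Simulated data from a 3-component 1-D Gaussian mixture (illustrative choice).
rng = np.random.default_rng(0)
data = np.concatenate([
    rng.normal(-5.0, 1.0, 300),
    rng.normal(0.0, 1.0, 300),
    rng.normal(5.0, 1.0, 300),
]).reshape(-1, 1)

# A truncated Dirichlet-process mixture: the prior is placed on the space of
# mixing distributions, so redundant components receive posterior weight near 0.
dpgmm = BayesianGaussianMixture(
    n_components=10,  # truncation level: an upper bound, not the true count
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,  # small concentration favors few components
    max_iter=500,
    random_state=0,
).fit(data)

# Components with non-negligible posterior weight approximate the unknown
# number of mixture components (threshold of 1% is a hypothetical choice).
effective = int(np.sum(dpgmm.weights_ > 0.01))
print("effective components:", effective)
```

With well-separated simulated clusters like these, the variational posterior typically concentrates its weight on a few components, illustrating the consistent recovery of the number of components that the seminar analyzes.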

Syllabus

Seminar Series: Parameter Estimation & Interpretability in Bayesian Mixture Models


Taught by

VinAI

Related Courses

Discrete Inference and Learning in Artificial Vision
École Centrale Paris via Coursera
Observation Theory: Estimating the Unknown
Delft University of Technology via edX
Computational Probability and Inference
Massachusetts Institute of Technology via edX
Probabilistic Graphical Models 3: Learning
Stanford University via Coursera
Applied Time-Series Analysis
Indian Institute of Technology Madras via Swayam