Bayesian Networks 3 - Maximum Likelihood - Stanford CS221: AI (Autumn 2019)

Offered By: Stanford University via YouTube

Tags

Statistics & Probability Courses, Artificial Intelligence Courses, Bayesian Networks Courses, Naive Bayes Courses, Maximum Likelihood Estimation Courses

Course Description

Overview

Learn about Bayesian networks and probabilistic inference in this Stanford University lecture from the CS221: AI course. Explore the origins of parameters, delve into various learning tasks, and examine examples including v-structures, inverted-v structures, and Naive Bayes. Understand parameter sharing, Hidden Markov Models (HMMs), and the general case learning algorithm. Discover maximum likelihood estimation, regularization techniques like Laplace smoothing, and the concept of maximum marginal likelihood. Conclude with an introduction to the Expectation Maximization (EM) algorithm, gaining valuable insights into advanced artificial intelligence concepts.
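As a quick illustration of the counting view of parameter learning covered in the lecture, here is a minimal Python sketch of maximum likelihood estimation for a conditional probability table, with Laplace smoothing as the regularizer. This is not from the course materials; the function name estimate_cpt and the toy data are made up for this example.

    # Minimal sketch (hypothetical, not from the lecture): estimate p(y | x)
    # from observed (x, y) pairs by counting. lam = 0 gives the maximum
    # likelihood estimate; lam > 0 applies Laplace (add-lambda) smoothing.
    from collections import Counter

    def estimate_cpt(pairs, x_values, y_values, lam=0.0):
        counts = Counter(pairs)              # joint counts of (x, y)
        totals = Counter(x for x, _ in pairs)  # marginal counts of x
        return {
            (x, y): (counts[(x, y)] + lam) / (totals[x] + lam * len(y_values))
            for x in x_values
            for y in y_values
        }

    # Three observations of a two-variable network x -> y.
    data = [("a", 1), ("a", 1), ("b", 0)]
    cpt = estimate_cpt(data, x_values=["a", "b"], y_values=[0, 1], lam=1)
    print(cpt[("a", 1)])  # (2 + 1) / (2 + 2) = 0.75

Note how smoothing keeps unseen outcomes (such as y = 1 given x = "b") from being assigned zero probability, which is the motivation for regularization discussed in the lecture.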

Syllabus

Introduction
Announcements
Review: Bayesian network
Review: probabilistic inference
Where do parameters come from?
Roadmap
Learning task
Example: one variable
Example: v-structure
Example: inverted-v structure
Parameter sharing
Example: Naive Bayes
Example: HMMs
General case: learning algorithm
Maximum likelihood
Scenario 2
Regularization: Laplace smoothing
Example: two variables
Motivation
Maximum marginal likelihood
Expectation Maximization (EM)


Taught by

Stanford Online

Related Courses

Statistical Inference and Modeling for High-throughput Experiments
Harvard University via edX
Estimation for Wireless Communications – MIMO/OFDM Cellular and Sensor Networks
Indian Institute of Technology Kanpur via Swayam
Generalized Linear Models
Saint Petersburg State University via Coursera
Introduction to the Theory of Constructing Multiple Hypothesis Testing Procedures
Higher School of Economics via Coursera
Bayesian Statistics: Mixture Models
University of California, Santa Cruz via Coursera