Bayesian Networks 3 - Maximum Likelihood - Stanford CS221: AI (Autumn 2019)
Offered By: Stanford University via YouTube
Course Description
Overview
Learn about Bayesian networks and probabilistic inference in this Stanford University lecture from the CS221: AI course. Explore where the parameters of a Bayesian network come from, survey the learning task, and work through examples including v-structures, inverted-v structures, and Naive Bayes. Understand parameter sharing, Hidden Markov Models (HMMs), and the general-case learning algorithm. Discover maximum likelihood estimation, regularization via Laplace smoothing, and maximum marginal likelihood. Conclude with an introduction to the Expectation Maximization (EM) algorithm for learning with partially observed data.
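For fully observed data, the maximum likelihood estimates discussed in the lecture reduce to normalized counts, and Laplace smoothing simply adds a pseudocount before normalizing. The sketch below illustrates that idea for a hypothetical two-variable network G -> R; the variable names, values, and data are made up for illustration and are not taken from the lecture.

# Minimal sketch: maximum likelihood by counting, with optional Laplace
# smoothing, for a two-variable Bayesian network G -> R (e.g., genre -> rating).
# Hypothetical example; not code from the course.
from collections import Counter

def estimate_cpts(data, lam=0.0, g_values=("d", "c"), r_values=(1, 2, 3, 4, 5)):
    """Estimate p(g) and p(r | g) from fully observed (g, r) pairs.

    lam is the Laplace pseudocount; lam=0 gives plain maximum likelihood
    (normalized counts), lam=1 gives add-one smoothing.
    """
    g_counts = Counter()
    gr_counts = Counter()
    for g, r in data:
        g_counts[g] += 1
        gr_counts[(g, r)] += 1

    n = len(data)
    p_g = {g: (g_counts[g] + lam) / (n + lam * len(g_values)) for g in g_values}
    p_r_given_g = {
        g: {
            r: (gr_counts[(g, r)] + lam) / (g_counts[g] + lam * len(r_values))
            for r in r_values
        }
        for g in g_values
    }
    return p_g, p_r_given_g

if __name__ == "__main__":
    data = [("d", 4), ("d", 4), ("d", 5), ("c", 1), ("c", 5)]
    p_g, p_r_given_g = estimate_cpts(data, lam=1.0)
    print(p_g)
    print(p_r_given_g["d"])

With lam=0 this recovers the raw maximum likelihood counts; increasing lam pulls the estimates toward the uniform distribution, which is the regularization effect Laplace smoothing provides when some counts are zero.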
        
Syllabus
 Introduction.
 Announcements.
 Review: Bayesian network.
 Review: probabilistic inference.
 Where do parameters come from?.
 Roadmap.
 Learning task.
 Example: one variable.
 Example: v-structure.
 Example: inverted-v structure.
 Parameter sharing.
 Example: Naive Bayes.
 Example: HMMs.
 General case: learning algorithm.
 Maximum likelihood.
 Scenario 2.
 Regularization: Laplace smoothing.
 Example: two variables.
 Motivation.
 Maximum marginal likelihood.
 Expectation Maximization (EM).
Taught by
Stanford Online
Related Courses
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Probabilistic Graphical Models 3: Learning (Stanford University via Coursera)
Graphical Models Certification Training (Edureka)
Probabilistic Machine Learning (Eberhard Karls University of Tübingen via YouTube)
An Introduction to Artificial Intelligence (NPTEL via YouTube)
