Bayesian Networks 2 - Forward-Backward - Stanford CS221: AI
Offered By: Stanford University via YouTube
Course Description
Overview
Learn about advanced concepts in Bayesian networks and probabilistic inference in this Stanford University lecture from the CS221: AI course. Explore hidden Markov models, lattice representations, and particle filtering techniques. Dive into topics such as beam search, object tracking, and Gibbs sampling. Gain a deeper understanding of forward-backward algorithms and their applications in artificial intelligence through comprehensive explanations and demonstrations.
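The overview's mention of the forward-backward algorithm can be made concrete with a short sketch. The NumPy code below is a minimal illustration, assuming a discrete HMM given by a start distribution, a transition matrix trans[h, h2] = P(H_{t+1}=h2 | H_t=h), and an emission matrix emit[h, e] = P(E_t=e | H_t=h); the function name and toy numbers are illustrative, not taken from the lecture.

```python
import numpy as np

def forward_backward(start, trans, emit, observations):
    """Forward-backward smoothing for a discrete HMM.

    start[h]     : P(H_1 = h)
    trans[h, h2] : P(H_{t+1} = h2 | H_t = h)
    emit[h, e]   : P(E_t = e | H_t = h)
    observations : observed symbols e_1 ... e_n
    Returns an (n, K) array whose row t is P(H_t | e_1:n).
    """
    n, K = len(observations), len(start)
    forward = np.zeros((n, K))
    backward = np.ones((n, K))

    # Forward pass: F_t(h) proportional to P(H_t = h, e_1:t).
    forward[0] = start * emit[:, observations[0]]
    forward[0] /= forward[0].sum()
    for t in range(1, n):
        forward[t] = (forward[t - 1] @ trans) * emit[:, observations[t]]
        forward[t] /= forward[t].sum()

    # Backward pass: B_t(h) proportional to P(e_{t+1}:n | H_t = h).
    for t in range(n - 2, -1, -1):
        backward[t] = trans @ (emit[:, observations[t + 1]] * backward[t + 1])
        backward[t] /= backward[t].sum()

    # Smoothed posteriors: S_t(h) proportional to F_t(h) * B_t(h).
    smoothed = forward * backward
    return smoothed / smoothed.sum(axis=1, keepdims=True)

# Hypothetical toy model: two hidden states, two observation symbols.
start = np.array([0.5, 0.5])
trans = np.array([[0.7, 0.3], [0.3, 0.7]])
emit = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_backward(start, trans, emit, [0, 0, 1]))
```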
Syllabus
Introduction.
Review: Bayesian network.
Review: probabilistic inference.
Hidden Markov model inference.
Lattice representation.
Summary.
Hidden Markov models.
Review: beam search.
Step 1: propose.
Step 2: weight.
Step 3: resample (these three particle-filtering steps are sketched in code after the syllabus).
Application: object tracking.
Particle filtering demo.
Roadmap.
Gibbs sampling.
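Steps 1 through 3 above (propose, weight, resample) are the core loop of particle filtering, which the lecture applies to object tracking. The sketch below is a minimal generic version under assumed interfaces: init, propose, and weight are hypothetical callables standing in for a transition and emission model, and the random-walk tracking example uses made-up toy numbers rather than the lecture's demo.

```python
import numpy as np

def particle_filter(num_particles, init, propose, weight, observations, rng=None):
    """One pass of particle filtering: propose / weight / resample per observation.

    init()           : sample an initial hidden state
    propose(state)   : sample a successor state from the transition model
    weight(state, e) : emission score, proportional to P(e | state)
    observations     : sequence of observations e_1 ... e_n
    Returns the particle set after the final observation.
    """
    rng = rng or np.random.default_rng(0)
    particles = [init() for _ in range(num_particles)]
    for e in observations:
        # Step 1: propose - advance each particle through the transition model.
        particles = [propose(s) for s in particles]
        # Step 2: weight - score each particle by how well it explains e.
        weights = np.array([weight(s, e) for s in particles], dtype=float)
        weights /= weights.sum()
        # Step 3: resample - redraw particles in proportion to their weights.
        idx = rng.choice(num_particles, size=num_particles, p=weights)
        particles = [particles[i] for i in idx]
    return particles

# Hypothetical toy tracking model: integer position with noisy observations.
rng = np.random.default_rng(0)
result = particle_filter(
    num_particles=200,
    init=lambda: rng.integers(0, 10),
    propose=lambda s: s + rng.integers(-1, 2),        # random-walk transition
    weight=lambda s, e: np.exp(-0.5 * (s - e) ** 2),  # Gaussian-like emission
    observations=[3, 4, 5, 6],
)
print(np.mean(result))  # approximate posterior mean of the final position
```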
Taught by
Stanford Online
Related Courses
Probabilistic Graphical Models 2: Inference (Stanford University via Coursera)
Deep Learning – Part 2 (Indian Institute of Technology Madras via Swayam)
Advanced Bayesian Methods (The National Centre for Research Methods via YouTube)
Natural Language Processing (Serrano.Academy via YouTube)
Latent Dirichlet Allocation with Gibbs Sampling Explained (Aladdin Persson via YouTube)