Bandits - Kevin Jamieson - University of Washington

Offered By: Paul G. Allen School via YouTube

Tags

Machine Learning, Algorithm Design, Central Limit Theorem, Drug Discovery, Bandit Algorithms, Thompson Sampling

Course Description

Overview

Explore the fundamentals of bandit algorithms in this comprehensive lecture from the University of Washington. Delve into the future of machine learning and discover how bandit algorithms are applied in various real-world scenarios, including drug development, Google Maps optimization, and content recommendation systems. Learn about stochastic models, Thompson sampling, and regret minimization techniques. Gain insights into key concepts such as sublinear regret, sub-Gaussian distributions, and the Central Limit Theorem. Enhance your understanding of this crucial area of machine learning and its practical applications in decision-making processes.
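As a taste of the Thompson sampling approach covered in the lecture, here is a minimal sketch for a Bernoulli (stochastic) bandit. It is an illustrative implementation, not code from the lecture itself; the arm means and horizon are hypothetical, and conjugate Beta priors are assumed for simplicity.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Beta-Bernoulli Thompson sampling on a stochastic bandit.

    true_means: unknown success probabilities of each arm (hypothetical values).
    horizon: number of rounds to play.
    Returns the total reward collected.
    """
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta(1, 1) uniform prior per arm
    beta = [1] * k
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample from each arm's posterior over its mean ...
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        # ... and pull the arm whose sampled mean is largest.
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward
```

For example, `thompson_sampling([0.3, 0.5, 0.7], horizon=1000)` should, over time, concentrate pulls on the 0.7 arm, which is what drives the sublinear regret discussed in the lecture.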

Syllabus

Introduction
The Future of Machine Learning
Bandits
Drug Makers
Google Maps
Content Recommendation
Stochastic Model
Thompson Sampling
Regret minimization
Regret
Sublinear Regret
Sub-Gaussian Distributions
Central Limit Theorem


Taught by

Paul G. Allen School

Related Courses

Reinforcement Learning
Indian Institute of Technology Madras via Swayam
Bandit Algorithm (Online Machine Learning)
Indian Institute of Technology Bombay via Swayam
Reinforcement Learning
Edureka
Tracking Significant Changes in Bandit - IFDS 2022
Paul G. Allen School via YouTube
Bandits - Lecture 5
Paul G. Allen School via YouTube