
Continuous-in-time Limit for Bandits

Offered By: USC Probability and Statistics Seminar via YouTube

Tags

Multi-Armed Bandits Courses
Statistics & Probability Courses
Algorithm Design Courses
Probability Theory Courses
Decision Theory Courses
Sequential Decision Making Courses

Course Description

Overview

Explore the connection between Hamilton-Jacobi-Bellman (HJB) equations and multi-armed bandit (MAB) problems in this 44-minute talk from the USC Probability and Statistics Seminar series. Delve into the first work establishing this connection in a general setting, presented by Yuhua Zhu of UCSD. Learn about an efficient algorithm for solving MAB problems based on this newly established link, and discover its practical applications. Gain insight into the exploration-exploitation trade-off in sequential decision making under uncertainty, the central tension of the MAB paradigm.
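
For context on the first half of the title's pairing: an HJB equation is the dynamic-programming equation of a continuous-time control problem. The talk's bandit-specific formulation is not reproduced on this page; the display below is only the generic finite-horizon form from stochastic control, with drift b, diffusion σ, running reward r, and terminal reward g.

```latex
\partial_t v(x,t) + \max_{a \in A}\Big\{\, b(x,a)\,\partial_x v(x,t)
  + \tfrac{1}{2}\,\sigma^2(x,a)\,\partial_{xx} v(x,t) + r(x,a) \,\Big\} = 0,
\qquad v(x,T) = g(x).
```

Likewise, the description mentions the exploration-exploitation trade-off without spelling it out. As a point of reference only (not the algorithm from the talk), here is a minimal sketch of the classic UCB1 index policy for a Bernoulli bandit; every name and parameter in it is illustrative.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Play a Bernoulli bandit for `horizon` rounds with the classic UCB1
    index: pull the arm maximizing mean_estimate + sqrt(2 ln t / n_i).
    `arm_means` are the success probabilities, unknown to the learner."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k        # pulls per arm
    values = [0.0] * k      # running mean reward per arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1     # play each arm once to initialize estimates
        else:
            # Upper confidence bound: exploitation term (values) plus
            # a logarithmic exploration bonus that shrinks with counts.
            arm = max(range(k),
                      key=lambda i: values[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return total_reward, counts

if __name__ == "__main__":
    # Illustrative arm probabilities; not data from the talk.
    reward, pulls = ucb1([0.3, 0.5, 0.7], horizon=10_000)
    print(f"total reward: {reward:.0f}, pulls per arm: {pulls}")
```

Run with a long horizon, the policy typically concentrates its pulls on the 0.7 arm (exploitation), while the logarithmic bonus keeps forcing occasional pulls of the weaker arms (exploration).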

Syllabus

Yuhua Zhu: Continuous-in-time Limit for Bandits (UCSD)


Taught by

USC Probability and Statistics Seminar

Related Courses

Toward Generalizable Embodied AI for Machine Autonomy
Bolei Zhou via YouTube
What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?
Simons Institute via YouTube
Better Learning from the Past - Counterfactual - Batch RL
Simons Institute via YouTube
Off-Policy Policy Optimization
Simons Institute via YouTube
Provably Efficient Reinforcement Learning with Linear Function Approximation - Chi Jin
Institute for Advanced Study via YouTube