A Smoothed Analysis of the Greedy Algorithm for Linear Contextual Bandits - Theory Seminar
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore a theory seminar on the smoothed analysis of the greedy algorithm in bandit learning. Delve into the tension between exploration and exploitation, particularly in high-stakes decision-making scenarios involving individuals. Examine how the greedy algorithm, which prioritizes immediate optimal decisions, can be analyzed in linear contextual bandit problems. Learn about the smoothed analysis approach, which demonstrates that small perturbations in adversarial context choices can lead to "no regret" performance. Investigate the implications for balancing exploration and exploitation in slightly perturbed environments. Cover topics such as classic algorithm design, online algorithms, online ML algorithms, single and multi-parameter models, regret analysis, diversity, and margins. Gain insights into the potential benefits and applications of greedy algorithms in various settings.
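To make the setting concrete, here is a minimal sketch of the greedy (pure-exploitation) algorithm for linear contextual bandits under smoothed contexts, as discussed in the seminar. All parameter values (dimension, number of arms, smoothing scale, noise level) are illustrative assumptions, not values from the talk: an adversary picks base contexts, Gaussian perturbation smooths them, and greedy always plays the arm with the highest estimated reward, with no exploration bonus.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 5, 3, 2000        # dimension, arms, rounds (illustrative)
sigma_ctx = 0.5             # smoothing scale of the context perturbation (assumed)
theta_star = rng.normal(size=(K, d))  # one unknown parameter vector per arm

# Per-arm ridge-regression state: A_k = lam*I + sum x x^T,  b_k = sum r x
lam = 1.0
A = np.stack([lam * np.eye(d) for _ in range(K)])
b = np.zeros((K, d))

regret = 0.0
for t in range(T):
    # Adversarially chosen base contexts, then Gaussian smoothing perturbation
    base = rng.uniform(-1.0, 1.0, size=(K, d))
    x = base + sigma_ctx * rng.normal(size=(K, d))

    # Greedy step: play the arm with the highest *estimated* reward
    theta_hat = np.stack([np.linalg.solve(A[k], b[k]) for k in range(K)])
    est = np.einsum('kd,kd->k', x, theta_hat)
    a = int(np.argmax(est))

    # True expected rewards, used only for regret bookkeeping
    mean = np.einsum('kd,kd->k', x, theta_star)
    regret += mean.max() - mean[a]

    # Observe a noisy reward and update the chosen arm's estimate
    r = mean[a] + 0.1 * rng.normal()
    A[a] += np.outer(x[a], x[a])
    b[a] += r * x[a]
```

The point of the smoothed-analysis result is that the perturbation makes the observed contexts diverse enough that greedy's own exploitation generates the data it needs to learn, so explicit exploration becomes unnecessary in this slightly perturbed environment.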
Syllabus
Intro
Meta-question
Classic Algorithm Design
Online Algorithms
Online ML Algorithms
Outline
Single-parameter model
Multi-parameter model
Regret wrt M
(good) performance of greedy algorithms?
Single-parameter regime
Multi-parameter regime
A change in perspective
Diversity
Margins
Why might we use greedy?
Taught by
Paul G. Allen School
Related Courses
Aprende a tomar decisiones económicas acertadas (Learn to Make Sound Economic Decisions) - Universidad Rey Juan Carlos via MiríadaX
Mathematics - Serious Science via YouTube
Economics - Serious Science via YouTube
Subgame Perfect Equilibrium - Wars of Attrition in Game Theory - Lecture 20 - Yale University via YouTube
Economic Decisions for the Foraging Individual - Lecture 32 - Yale University via YouTube