A Smoothed Analysis of the Greedy Algorithm for Linear Contextual Bandits - Theory Seminar
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore a theory seminar on the smoothed analysis of the greedy algorithm in bandit learning. Delve into the tension between exploration and exploitation, particularly in high-stakes decision-making scenarios involving individuals. Examine how the greedy algorithm, which always makes the immediately optimal decision, behaves in linear contextual bandit problems. Learn about the smoothed analysis approach, which shows that small perturbations of adversarial context choices allow the greedy algorithm to achieve no-regret performance. Investigate the implications for balancing exploration and exploitation in slightly perturbed environments. Cover topics such as classic algorithm design, online algorithms, online ML algorithms, single- and multi-parameter models, regret analysis, diversity, and margins. Gain insights into the potential benefits and applications of greedy algorithms in various settings.
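The setting described above can be illustrated with a minimal simulation. The sketch below is not the seminar's analysis; it is a hypothetical toy implementation of a greedy (exploit-only) linear contextual bandit, where each round's context is a base vector plus a small Gaussian perturbation, as in the smoothed-analysis model. All function names, dimensions, and parameter values are illustrative assumptions.

```python
import numpy as np

def greedy_linear_bandit(theta_star, T=2000, sigma=0.1, noise=0.05, seed=0):
    """Greedy (exploit-only) play on a linear contextual bandit with
    smoothed (Gaussian-perturbed) contexts.

    theta_star: (K, d) array, one true parameter vector per arm
                (illustrative assumption, not from the talk).
    Returns the cumulative regret after each round.
    """
    rng = np.random.default_rng(seed)
    K, d = theta_star.shape
    # Per-arm ridge-regularized least-squares statistics: A_k = I + sum x x^T,
    # b_k = sum reward * x, so theta_hat_k = A_k^{-1} b_k.
    A = np.stack([np.eye(d) for _ in range(K)])
    b = np.zeros((K, d))
    regret = np.zeros(T)
    total = 0.0
    for t in range(T):
        base = rng.uniform(-1, 1, size=d)        # stands in for an adversarial choice
        x = base + rng.normal(0, sigma, size=d)  # smoothing perturbation
        # Greedy step: estimate each arm and pick the best, no exploration bonus.
        theta_hat = np.array([np.linalg.solve(A[k], b[k]) for k in range(K)])
        k = int(np.argmax(theta_hat @ x))
        reward = theta_star[k] @ x + rng.normal(0, noise)
        total += np.max(theta_star @ x) - theta_star[k] @ x
        regret[t] = total
        A[k] += np.outer(x, x)
        b[k] += reward * x
    return regret

rng = np.random.default_rng(1)
theta = rng.normal(size=(3, 5))
reg = greedy_linear_bandit(theta)
```

The point of the smoothing term `sigma` is that the perturbed contexts supply "free" diversity, so the greedy algorithm's least-squares estimates keep improving even though it never deliberately explores.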
Syllabus
Intro
Meta-question
Classic Algorithm Design
Online Algorithms
Online ML Algorithms
Outline
Single-parameter model
Multi-parameter model
Regret wrt M
(Good) performance of greedy algorithms?
Single-parameter regime
Multi-parameter regime
A change in perspective
Diversity
Margins
Why might we use greedy?
Taught by
Paul G. Allen School
Related Courses
Algorithms: Design and Analysis, Part 2
Stanford University via Coursera
Conception et mise en œuvre d'algorithmes (Design and Implementation of Algorithms)
École Polytechnique via Coursera
Biology Meets Programming: Bioinformatics for Beginners
University of California, San Diego via Coursera
算法基础 (Fundamental Algorithms)
Peking University via Coursera
算法基础 | Fundamental Algorithms
Peking University via edX