Independent Learning Dynamics for Stochastic Games - Where Game Theory Meets
Offered By: International Mathematical Union via YouTube
Course Description
Overview
Explore a 46-minute lecture on independent learning dynamics for stochastic games in multi-agent reinforcement learning. Delve into the challenges of applying classical reinforcement learning to multi-agent scenarios and discover recently proposed independent learning dynamics that guarantee convergence in stochastic games. Examine both zero-sum and single-controller identical-interest settings, while revisiting key concepts from game theory and reinforcement learning. Learn about the mathematical novelties in analyzing these dynamics, including differential inclusion approximation and Lyapunov functions. Gain insights into topics such as Nash equilibrium, fictitious play, and model-free individual Q-learning, all within the context of dynamic multi-agent environments.
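To make the fictitious-play idea concrete, here is a minimal sketch of discrete-time fictitious play in a two-player zero-sum matrix game (Matching Pennies). Each player best-responds to the empirical frequency of the opponent's past actions; in zero-sum games these empirical frequencies converge to a Nash equilibrium. The payoff matrix, initialization, and iteration count are illustrative choices, not taken from the lecture.

```python
import numpy as np

# Matching Pennies: row player's payoffs; column player receives -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# Action counts, initialized to ones to define initial empirical frequencies.
counts_row = np.ones(2)
counts_col = np.ones(2)

for _ in range(20000):
    # Empirical mixed strategies observed so far.
    freq_row = counts_row / counts_row.sum()
    freq_col = counts_col / counts_col.sum()
    # Each player plays a best response to the opponent's empirical play.
    br_row = np.argmax(A @ freq_col)   # row player maximizes expected payoff
    br_col = np.argmin(freq_row @ A)   # column player minimizes row's payoff
    counts_row[br_row] += 1
    counts_col[br_col] += 1

# Empirical frequencies approach the unique Nash equilibrium (0.5, 0.5).
print(counts_row / counts_row.sum())
print(counts_col / counts_col.sum())
```

The lecture's stochastic-game setting generalizes this: play moves across states, and the learning dynamics couple a fictitious-play-style strategy update with value estimates, which is where the differential-inclusion and Lyapunov-function analysis enters.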
Syllabus
Introduction
Welcome
Reinforcement Learning
Nash Equilibrium
Fictitious Play
Multi-Agent Learning
Literature Review
Motivation
Outline
Stochastic Game
Optimality
Top Game Theory
Mathematical Dynamics
Learning Rates
Convergence Analysis
Differential Inclusion Approximation
Lyapunov Function
Harriss Lyapunov Function
Zero Sum Case
Zero Potential Case
Convergence
Monotonicity
Model-Free
Individual Q Learning
Taught by
International Mathematical Union
Related Courses
Game Theory — Stanford University via Coursera
Model Thinking — University of Michigan via Coursera
Online Games: Literature, New Media, and Narrative — Vanderbilt University via Coursera
Games without Chance: Combinatorial Game Theory — Georgia Institute of Technology via Coursera
Competitive Strategy — Ludwig-Maximilians-Universität München via Coursera