Independent Learning Dynamics for Stochastic Games - Where Game Theory Meets Reinforcement Learning

Offered By: International Mathematical Union via YouTube

Tags

Multi-Agent Reinforcement Learning Courses
Game Theory Courses
Nash Equilibrium Courses

Course Description

Overview

Explore a 46-minute lecture on independent learning dynamics for stochastic games in multi-agent reinforcement learning. Delve into the challenges of applying classical reinforcement learning to multi-agent scenarios and discover recently proposed independent learning dynamics that guarantee convergence in stochastic games. Examine both zero-sum and single-controller identical-interest settings, while revisiting key concepts from game theory and reinforcement learning. Learn about the mathematical novelties in analyzing these dynamics, including differential inclusion approximation and Lyapunov functions. Gain insights into topics such as Nash equilibrium, fictitious play, and model-free individual Q-learning, all within the context of dynamic multi-agent environments.
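Fictitious play and Nash equilibrium, two of the concepts the lecture revisits, can be illustrated with a short sketch. The Python snippet below is not drawn from the course materials; it is a minimal, hypothetical example (the matching-pennies payoff matrix and all variable names are assumptions) in which each player best-responds to the opponent's empirical action frequencies in a two-player zero-sum matrix game, and the empirical strategies drift toward the mixed Nash equilibrium.

# Illustrative sketch only: discrete-time fictitious play in matching pennies.
# Payoff matrix and setup are assumptions for demonstration, not the lecture's model.
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # row player's payoffs; column player receives -A

counts_row = np.ones(2)   # empirical action counts, initialized to break ties
counts_col = np.ones(2)

for t in range(10000):
    # Each player best-responds to the opponent's empirical action frequencies.
    col_freq = counts_col / counts_col.sum()
    row_freq = counts_row / counts_row.sum()
    a_row = int(np.argmax(A @ col_freq))      # row player's best response
    a_col = int(np.argmax(-(row_freq @ A)))   # column player's best response
    counts_row[a_row] += 1
    counts_col[a_col] += 1

print("Row empirical strategy:", counts_row / counts_row.sum())
print("Col empirical strategy:", counts_col / counts_col.sum())
# Both empirical strategies should approach the mixed Nash equilibrium (0.5, 0.5).

The independent learning dynamics discussed in the lecture extend this kind of best-response learning to stochastic games, where play also depends on an evolving state.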

Syllabus

Introduction
Welcome
Reinforcement Learning
Nash Equilibrium
Fictitious Play
Multi-Agent Learning
Literature Review
Motivation
Outline
Stochastic Game
Optimality
Top Game Theory
Mathematical Dynamics
Learning Rates
Convergence Analysis
Differential Inclusion Approximation
Lyapunov Function
Harris Lyapunov Function
Zero Sum Case
Zero Potential Case
Convergence
Monotonicity
Model-Free
Individual Q-Learning


Taught by

International Mathematical Union

Related Courses

實驗經濟學 (Experimental Economics: Behavioral Game Theory)
National Taiwan University via Coursera
竞争策略 (Competitive Strategy, Chinese Version)
Ludwig-Maximilians-Universität München via Coursera
Welcome to Game Theory
University of Tokyo via Coursera
Strategy: An Introduction to Game Theory
Indian Institute of Technology Kanpur via Swayam
Теория игр (Game Theory)
Moscow Institute of Physics and Technology via Coursera