Connections between POMDPs and Partially Observed N-Player Mean-Field Games
Offered By: GERAD Research Center via YouTube
Course Description
Overview
Explore the connections between Partially Observable Markov Decision Processes (POMDPs) and partially observed n-player mean-field games in this 58-minute seminar presented by Bora Yongacoglu from the University of Toronto. Delve into a discrete-time model of mean-field games with finitely many players and partial observability of the global state, focusing on settings with mean-field observability. Discover how, when an agent's counterparts all use the same symmetric, stationary, memoryless policy, that agent faces a fully observed, time-homogeneous MDP, and learn about the existence of memoryless, stationary perfect equilibrium in n-player games with mean-field observability. Examine, through examples, the limitations of relaxing the symmetry condition, and explore scenarios with narrower observation channels where agents face POMDPs rather than MDPs, even when their counterparts' policies are symmetric.
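The reduction described above can be illustrated with a minimal toy sketch (not from the talk itself; all names, the binary state space, and the switching rule are illustrative assumptions). With mean-field observability, each agent sees its own local state plus the empirical distribution of all agents' states; when every counterpart applies the same symmetric, stationary, memoryless policy, the pair (own state, mean-field) evolves as a fully observed Markov chain for a tagged agent:

```python
import random
from collections import Counter

# Hypothetical toy example: N agents with binary local states. Each agent
# observes its own state and the empirical mean-field (the fraction of
# agents in each state). Under a shared symmetric, stationary, memoryless
# policy, the pair (own state, mean-field) is the state of a fully
# observed MDP for any tagged agent.

N_AGENTS = 100
STATES = [0, 1]

def mean_field(states):
    """Empirical distribution of local states across all agents."""
    counts = Counter(states)
    return tuple(counts[s] / len(states) for s in STATES)

def policy(own_state, mf):
    """Symmetric stationary memoryless policy: the same map for every agent.

    Illustrative rule: switch to the other state with probability equal
    to the mean-field mass currently on that other state.
    """
    return 1 - own_state if random.random() < mf[1 - own_state] else own_state

def step(states):
    """One synchronous transition: all agents apply the shared policy."""
    mf = mean_field(states)
    return [policy(s, mf) for s in states]

random.seed(0)
states = [random.choice(STATES) for _ in range(N_AGENTS)]
for _ in range(5):
    states = step(states)
print(mean_field(states))  # the mean-field component of the tagged agent's MDP state
```

The key point the sketch makes concrete: because the policy is shared and depends only on the current observation, the tagged agent needs no memory of past observations, which is exactly what fails under the narrower observation channels also discussed in the seminar.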
Syllabus
Connections between POMDPs and partially observed n-player mean-field games, Bora Yongacoglu
Taught by
GERAD Research Center
Related Courses
Introduction to Artificial Intelligence — Stanford University via Udacity
Decision-Making for Autonomous Systems — Chalmers University of Technology via edX
Fundamentals of Reinforcement Learning — University of Alberta via Coursera
A Complete Reinforcement Learning System (Capstone) — University of Alberta via Coursera
An Introduction to Artificial Intelligence — Indian Institute of Technology Delhi via Swayam