Connections between POMDPs and Partially Observed N-Player Mean-Field Games
Offered By: GERAD Research Center via YouTube
Course Description
Overview
Explore the connections between Partially Observable Markov Decision Processes (POMDPs) and partially observed n-player mean-field games in this 58-minute seminar presented by Bora Yongacoglu from the University of Toronto. Delve into a discrete-time model of mean-field games with finitely many players and partial observability of the global state, focusing on settings with mean-field observability. Discover how, when counterparts use symmetric stationary memoryless policies, a given agent faces a fully observed, time-homogeneous MDP, and learn about the existence of memoryless, stationary perfect equilibrium in n-player games with mean-field observability. Examine, through examples, the limitations of relaxing the symmetry condition, and explore scenarios with narrower observation channels in which agents face POMDPs rather than MDPs, even when counterparts use symmetric policies.
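To make the reduction described above concrete, here is a minimal sketch in standard discrete-time mean-field-game notation; the symbols used (x^i_t for agent i's local state, mu_t for the empirical state distribution, pi for a counterpart policy, r and beta for reward and discount) are illustrative assumptions and are not taken from the talk itself. Under mean-field observability, every agent observes its own state together with the empirical measure
\[
\mu_t \;=\; \frac{1}{n}\sum_{j=1}^{n} \delta_{x^j_t}.
\]
If each counterpart $j \neq i$ plays the same stationary memoryless policy $a^j_t \sim \pi(\cdot \mid x^j_t, \mu_t)$, then the pair $(x^i_t, \mu_t)$ evolves as a time-homogeneous Markov chain, and agent $i$'s best-response problem reduces to the Bellman equation
\[
V(x,\mu) \;=\; \max_{a}\Big\{ r(x,a,\mu) \;+\; \beta \sum_{x',\mu'} P\big(x',\mu' \mid x,\mu,a,\pi\big)\, V(x',\mu') \Big\}.
\]
With a narrower observation channel (for example, if agent $i$ observes only $x^i_t$), the measure $\mu_t$ becomes hidden state, and the same best-response problem is a POMDP even when counterparts play symmetric policies.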
Syllabus
Connections between POMDPs and partially observed n-player mean-field games, Bora Yongacoglu
Taught by
GERAD Research Center
Related Courses
Aprende a tomar decisiones económicas acertadas (Universidad Rey Juan Carlos via Miríadax)
Mathematics (Serious Science via YouTube)
Economics (Serious Science via YouTube)
Subgame Perfect Equilibrium - Wars of Attrition in Game Theory - Lecture 20 (Yale University via YouTube)
Economic Decisions for the Foraging Individual - Lecture 32 (Yale University via YouTube)