Learning to Cooperate and Compete via Self Play
Offered By: Cooperative AI Foundation via YouTube
Course Description
Overview
Explore the intricacies of multi-agent artificial intelligence in this lecture from the 2023 Cooperative AI Summer School. Delve into the world of imperfect-information games as Noam Brown, a renowned researcher at OpenAI, shares insights on learning to cooperate and compete through self-play. Discover the groundbreaking work behind Libratus and Pluribus, the first AI systems to defeat top human players in two-player and multiplayer no-limit poker. Gain valuable knowledge from Brown's expertise, which has earned him accolades such as the Marvin Minsky Medal for Outstanding Achievements in AI and recognition as one of MIT Tech Review's 35 Innovators Under 35. Uncover the scientific breakthroughs that led to Pluribus being named one of the top 10 scientific achievements by Science Magazine. Learn from Brown's distinguished career, including his time at Facebook AI Research and his award-winning PhD work at Carnegie Mellon University, as he presents cutting-edge concepts in cooperative and competitive AI strategies.
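To give a flavor of the self-play idea the lecture covers, below is a minimal, illustrative sketch (not part of the course materials) of regret matching in rock-paper-scissors. Regret matching is the building block behind counterfactual regret minimization, the family of algorithms underlying Libratus and Pluribus; here two agents repeatedly play each other and their average strategies converge toward a Nash equilibrium. All names and parameters are chosen for illustration only.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    # Zero-sum payoff for playing action a against action b.
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy_from_regrets(regrets):
    # Regret matching: play each action in proportion to its positive regret.
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    return [1.0 / ACTIONS] * ACTIONS  # fall back to uniform play

def self_play(iterations=100_000):
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sums = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(regrets[p]) for p in range(2)]
        actions = [random.choices(range(ACTIONS), weights=s)[0] for s in strategies]
        for p in range(2):
            opp = actions[1 - p]
            got = payoff(actions[p], opp)
            for a in range(ACTIONS):
                # Regret: how much better action a would have done than the action played.
                regrets[p][a] += payoff(a, opp) - got
                strategy_sums[p][a] += strategies[p][a]
    # The average strategy over all iterations approximates a Nash equilibrium.
    return [[s / iterations for s in strategy_sums[p]] for p in range(2)]

if __name__ == "__main__":
    print("Average strategies (close to uniform for rock-paper-scissors):", self_play())
```

Running the sketch prints average strategies near (1/3, 1/3, 1/3) for both players, the equilibrium of rock-paper-scissors; the lecture discusses how related self-play methods scale to far larger imperfect-information games such as no-limit poker.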
Syllabus
Learning to Cooperate and Compete via Self Play
Taught by
Cooperative AI Foundation
Related Courses
Stanford Seminar - Failures & Where to Find Them: Considering Safety as a Function of Structure (Stanford University via YouTube)
Modeling Conceptual Understanding in Image Reference Games - CVPR 2020 Tutorial (Bolei Zhou via YouTube)
Multi-Agent Reinforcement Learning - Part II (Simons Institute via YouTube)
AI - From Algorithms to Ethics - ACM WomENcourage 2020 (Association for Computing Machinery (ACM) via YouTube)
Python Reinforcement Learning using OpenAI Gymnasium – Full Course (freeCodeCamp)