Modeling Conceptual Understanding in Image Reference Games - CVPR 2020 Tutorial
Offered By: Bolei Zhou via YouTube
Course Description
Overview
Explore a CVPR'20 iMLCV tutorial on modeling conceptual understanding in image reference games presented by Zeynep Akata. Delve into topics such as learning via explanation, attributes and natural language as explanations, grounding visual explanations, and machine theory of mind. Examine the implementation of perceptual modules, agent embeddings, and policy learning in image reference games with failure in concept understanding. Analyze the comparison of learned policies against baselines, evaluate cluster quality, and review qualitative results of modeling conceptual understanding. Gain insights into rational quantitative attribution of beliefs, desires, and percepts in human mentalizing, as well as mind-aware multi-agent management reinforcement learning.
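The overview names three implementation pieces covered in the tutorial: a perceptual module, an agent embedding, and a learned policy for the image reference game. Below is a minimal, hypothetical PyTorch sketch of how such pieces might fit together; the class names, feature dimensions, and the fixed pool of ten listeners are illustrative assumptions, not the tutorial's actual implementation.

# Hypothetical sketch: perceptual module + agent embedding + speaker policy
# for an image reference game. Shapes and module choices are assumptions.
import torch
import torch.nn as nn

class PerceptualModule(nn.Module):
    """Maps raw images to feature vectors (stand-in for a pretrained CNN)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, feat_dim),
        )

    def forward(self, images):          # images: (B, 3, H, W)
        return self.encoder(images)     # (B, feat_dim)

class SpeakerPolicy(nn.Module):
    """Scores candidate images conditioned on an embedding of the listener agent."""
    def __init__(self, feat_dim=64, agent_dim=16, num_listeners=10):
        super().__init__()
        self.agent_embedding = nn.Embedding(num_listeners, agent_dim)
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim + agent_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, candidate_feats, listener_id):  # (B, K, feat_dim), (B,)
        agent = self.agent_embedding(listener_id)                        # (B, agent_dim)
        agent = agent.unsqueeze(1).expand(-1, candidate_feats.size(1), -1)
        scores = self.scorer(torch.cat([candidate_feats, agent], dim=-1))
        return scores.squeeze(-1)                                        # (B, K)

# Toy forward pass: for each game, the speaker chooses which of K candidate
# images to use as the referent, conditioned on which listener it faces.
perception = PerceptualModule()
policy = SpeakerPolicy()
images = torch.randn(2, 4, 3, 32, 32)                  # 2 games, 4 candidates each
feats = perception(images.flatten(0, 1)).view(2, 4, -1)
logits = policy(feats, torch.tensor([0, 3]))
action = torch.distributions.Categorical(logits=logits).sample()
print(action)                                          # chosen candidate index per game

In a reinforcement-learning setup like the one the syllabus suggests, the sampled action's log-probability would be used with a game-success reward to update the policy; that training loop is omitted here.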
Syllabus
Intro
Outline
Learning via Explanation
Attributes as Explanations
Natural Language as Explanations for Communication
Grounding Visual Explanations
Rational Quantitative Attribution of Beliefs, Desires and Percepts in Human Mentalizing
Machine Theory of Mind
M³RL: Mind-aware Multi-agent Management Reinforcement Learning
Image Reference Games with Failure in Concept Understanding
Perceptual Modules (PM)
Agent Embedding (AE)
Policy Learning: Different Policies Implemented Here
Comparing Learned Policies vs Baselines
Showing Necessity of Agent Embeddings
Evaluating Cluster Quality
Modeling Conceptual Understanding Qualitative Results
Conclusions
Taught by
Bolei Zhou (YouTube channel); tutorial presented by Zeynep Akata
Related Courses
Computational Neuroscience - University of Washington via Coursera
Reinforcement Learning - Brown University via Udacity
Reinforcement Learning - Indian Institute of Technology Madras via Swayam
FA17: Machine Learning - Georgia Institute of Technology via edX
Introduction to Reinforcement Learning - Higher School of Economics via Coursera