Stanford Seminar - Human-AI Interaction Under Societal Disagreement
Offered By: Stanford University via YouTube
Course Description
Overview
Explore a thought-provoking Stanford seminar on human-AI interaction in the face of societal disagreement. Delve into the challenges of developing machine learning algorithms that must navigate conflicting perspectives on ground truth across various AI applications. Learn about Jury Learning, an interactive AI architecture that lets developers explicitly decide whose voices should influence model predictions. Discover the Disagreement Deconvolution metric, which reveals how current evaluation methods may overstate model performance on user-facing tasks. Gain insights into a new pipeline for encoding human values and goals in AI systems, bridging HCI principles with machine learning realities. Presented by Mitchell Gordon, a Stanford University PhD student in Human-Computer Interaction, this 53-minute seminar offers valuable perspectives on addressing societal disagreements in AI development and evaluation.
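The seminar presents Jury Learning at a conceptual level; as a rough illustration only (not the published implementation, which models each annotator's likely label with a learned model), the Python sketch below assembles a jury matching a developer-specified composition and aggregates per-juror predictions by majority vote. The group names, pool, and toy per-annotator predictor are all hypothetical.

```python
import random
from collections import Counter
from typing import Callable, Dict, List

# Hypothetical annotator record: an id plus an illustrative group attribute.
Annotator = Dict[str, str]

def assemble_jury(pool: List[Annotator],
                  composition: Dict[str, int],
                  seed: int = 0) -> List[Annotator]:
    """Sample a jury whose group counts match the developer's specification,
    e.g. {"group_a": 7, "group_b": 5}. Groups here are purely illustrative."""
    rng = random.Random(seed)
    jury: List[Annotator] = []
    for group, count in composition.items():
        members = [a for a in pool if a["group"] == group]
        jury.extend(rng.sample(members, count))
    return jury

def jury_prediction(jury: List[Annotator],
                    predict_individual: Callable[[Annotator, str], str],
                    example: str) -> str:
    """Predict each juror's label for the example, then aggregate by
    majority vote; the actual system may aggregate differently."""
    votes = Counter(predict_individual(juror, example) for juror in jury)
    return votes.most_common(1)[0][0]

# Toy stand-in for a per-annotator classifier (entirely hypothetical).
def toy_predictor(annotator: Annotator, example: str) -> str:
    if annotator["group"] == "group_a" and "slur" in example:
        return "toxic"
    return "not_toxic"

pool = [{"id": str(i), "group": "group_a" if i % 2 else "group_b"}
        for i in range(40)]
jury = assemble_jury(pool, {"group_a": 7, "group_b": 5})
print(jury_prediction(jury, toy_predictor, "an example containing a slur"))
```

Changing the composition passed to assemble_jury changes whose predicted judgments drive the output, which is the interactive lever the seminar highlights.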
Syllabus
Stanford Seminar - Human-AI Interaction Under Societal Disagreement
Taught by
Stanford Online
Related Courses
Macroeconometric Forecasting (International Monetary Fund via edX)
Machine Learning With Big Data (University of California, San Diego via Coursera)
Data Science at Scale - Capstone Project (University of Washington via Coursera)
Structural Equation Model and its Applications (in Cantonese) (The Chinese University of Hong Kong via Coursera)
Data Science in Action - Building a Predictive Churn Model (SAP Learning)