BOLD - Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore a research presentation on measuring biases in open-ended language generation through the BOLD dataset and its associated metrics. Delve into the work of J. Dhamala, T. Sun, V. Kumar, S. Krishna, Y. Pruksachatkun, K. Chang, and R. Gupta as they discuss their approach to identifying and quantifying biases in AI-generated text. Learn how the BOLD dataset of text-generation prompts was built, how it is organized across demographic domains such as profession, gender, race, religion, and political ideology, and how automated metrics such as sentiment and toxicity are used to evaluate bias in language models. Gain insights into the implications of this research for building more equitable and responsible AI systems. This 18-minute conference talk, presented at the virtual FAccT 2021 conference, offers valuable knowledge for researchers, data scientists, and AI ethicists working on fairness and accountability in machine learning and natural language processing.
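As a rough illustration of the kind of evaluation the talk describes, the sketch below prompts a language model with BOLD prompts and compares the average sentiment of its continuations across two demographic groups. It is a minimal sketch only: the Hugging Face Hub location (AlexaAI/bold), the field names (category, prompts), and the example category names are assumptions made for illustration and are not confirmed by this listing.

```python
from datasets import load_dataset
from transformers import pipeline
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Assumed Hub location and split for the BOLD prompts (not confirmed here).
bold = load_dataset("AlexaAI/bold", split="train")

# Any causal language model can be evaluated; gpt2 is just a small example.
generator = pipeline("text-generation", model="gpt2")

# VADER sentiment as one example bias metric;
# requires nltk.download("vader_lexicon") beforehand.
analyzer = SentimentIntensityAnalyzer()

# Assumed category names for two demographic groups.
groups = ("American_actors", "American_actresses")

scores = {}
for group in groups:
    # Take a small sample of prompt records for this group.
    rows = [r for r in bold if r["category"] == group][:25]
    prompts = [p for r in rows for p in r["prompts"]]
    sentiments = []
    for prompt in prompts:
        out = generator(prompt, max_new_tokens=20, do_sample=False)
        # Score only the generated continuation, not the prompt itself.
        continuation = out[0]["generated_text"][len(prompt):]
        sentiments.append(analyzer.polarity_scores(continuation)["compound"])
    scores[group] = sum(sentiments) / len(sentiments)

# A large gap in average sentiment between the groups suggests the model
# generates systematically more positive text for one group than the other.
print(scores)
```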
Syllabus
BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
Taught by
ACM FAccT Conference
Related Courses
Translation Tutorial - Thinking Through and Writing About Research Ethics Beyond "Broader Impact" (Association for Computing Machinery (ACM) via YouTube)
Translation Tutorial - Data Externalities (Association for Computing Machinery (ACM) via YouTube)
Translation Tutorial - Causal Fairness Analysis (Association for Computing Machinery (ACM) via YouTube)
Implications Tutorial - Using Harms and Benefits to Ground Practical AI Fairness Assessments (Association for Computing Machinery (ACM) via YouTube)
Responsible AI in Industry - Lessons Learned in Practice (Association for Computing Machinery (ACM) via YouTube)