Chatbot Arena: An Open Crowdsourced Platform for Human Feedback on LLMs
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the innovative Chatbot Arena platform in this 27-minute conference talk by Wei-Lin Chiang from UC Berkeley and LMSYS. Discover how this open crowdsourced system evaluates large language models (LLMs) using human feedback, allowing users to compare anonymous models side-by-side and vote for superior responses. Learn about the Elo rating system's application in ranking chatbot performance and gain insights into the platform's real-world impact, having processed millions of user requests and collected over 100,000 votes. Delve into the publicly available datasets of user conversations and human preferences, and examine use cases including content moderation model development, safety benchmark creation, instruction-following model training, and challenging benchmark question formulation. For more in-depth information, refer to the associated research paper available at https://arxiv.org/abs/2309.11998.
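The Elo rating system mentioned above can be illustrated with a short sketch. This is a minimal, hypothetical implementation of generic Elo updates over pairwise votes, not LMSYS's actual ranking code (which the linked paper describes in detail); the model names, starting rating of 1000, and K-factor of 32 are illustrative assumptions.

```python
# Minimal sketch: ranking chatbots from pairwise human votes with Elo.
# Not the LMSYS implementation; names and constants are illustrative.

def expected_score(r_a: float, r_b: float) -> float:
    # Elo-model probability that the player rated r_a beats r_b.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(ratings: dict, winner: str, loser: str, k: float = 32) -> None:
    # Apply one vote: `winner` beat `loser`. A win against a higher-rated
    # opponent (low expected score) moves more rating points.
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e_w)
    ratings[loser] -= k * (1 - e_w)

# Hypothetical example: three anonymous models start at 1000,
# then accumulate ratings from a stream of side-by-side votes.
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
votes = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
for winner, loser in votes:
    update_elo(ratings, winner, loser)

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
```

After these three votes, model_a tops the leaderboard, having won both of its comparisons.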
Syllabus
Chatbot Arena: An Open Crowdsourced Platform for Human Feedback on LLMs - Wei-Lin Chiang
Taught by
Linux Foundation
Related Courses
Convolutions for Text Classification with Keras (Coursera Project Network via Coursera)
How Technology is Shaping Democracy and the 2020 Election (Stanford University via Coursera)
Microsoft Azure Cognitive Services: Content Moderator (Pluralsight)
Machine Learning and Microsoft Cognitive Services (Pluralsight)
Deep Dive on Amazon Rekognition: Building Computer Visions Based Smart Applications (Italian) (Amazon Web Services via AWS Skill Builder)