Chatbot Arena: An Open Crowdsourced Platform for Human Feedback on LLMs
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the innovative Chatbot Arena platform in this 27-minute conference talk by Wei-Lin Chiang from UC Berkeley and LMSYS. Discover how this open crowdsourced system evaluates large language models (LLMs) using human feedback, allowing users to compare anonymous models side-by-side and vote for superior responses. Learn about the Elo rating system's application in ranking chatbot performance and gain insights into the platform's real-world impact, having processed millions of user requests and collected over 100,000 votes. Delve into the publicly available datasets of user conversations and human preferences, and examine use cases including content moderation model development, safety benchmark creation, instruction-following model training, and challenging benchmark question formulation. For more in-depth information, refer to the associated research paper available at https://arxiv.org/abs/2309.11998.
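The Elo rating mentioned in the overview works by updating two models' scores after each pairwise vote: the winner gains points and the loser loses points, scaled by how surprising the outcome was. A minimal sketch of a single Elo update is below; the K-factor of 32 and the starting rating are illustrative assumptions, not values taken from the Chatbot Arena paper.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Update two Elo ratings after one comparison.

    score_a is 1.0 if model A won, 0.0 if it lost, 0.5 for a tie.
    k (the K-factor) controls how far ratings move per vote; 32 is a
    common illustrative choice, not the platform's actual setting.
    """
    # Expected win probability for A given the current rating gap.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    # Winner moves up, loser moves down, by symmetric amounts.
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Two equally rated models; A wins one vote.
print(elo_update(1000, 1000, 1.0))  # (1016.0, 984.0)
```

With equal starting ratings the expected score is 0.5, so a single win moves each rating by k/2; repeated votes across many users drive the ratings toward a stable ranking.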
Syllabus
Chatbot Arena: An Open Crowdsourced Platform for Human Feedback on LLMs - Wei-Lin Chiang
Taught by
Linux Foundation
Related Courses
How to Build a Chatbot Without Coding (IBM via Coursera)
Building Bots for Journalism: Software You Talk With (Knight Center for Journalism in the Americas via Independent)
Microsoft Bot Framework and Conversation as a Platform (Microsoft via edX)
AI Chatbots without Programming (IBM via edX)
Smarter Chatbots with Node-RED and Watson AI (IBM via edX)