Efficient Multi-Prompt Evaluation Explained
Offered By: Unify via YouTube
Course Description
Overview
Explore a comprehensive presentation on PromptEval, a novel method for estimating large language model performance across multiple prompts. Delve into the research conducted by Felipe Polo from the University of Michigan and his co-authors, which introduces an efficient approach to evaluate LLMs under practical budget constraints. Learn how PromptEval borrows strength across prompts and examples to produce accurate performance estimates. Gain insights into the methodology, implications, and potential applications of this innovative evaluation technique in the field of AI and natural language processing.
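The core idea of "borrowing strength" can be illustrated with a small sketch: evaluate only a budget-limited subset of (prompt, example) pairs, fit a simple Rasch-style model with a per-prompt ability term and a per-example easiness term, and use it to predict the unobserved cells and estimate per-prompt accuracy. This is a hypothetical illustration of the general idea, not the authors' PromptEval implementation, which uses richer models; all sizes and variable names here are assumptions.

```python
# Sketch of the "borrow strength across prompts and examples" idea
# (illustrative only, not the PromptEval codebase).
import numpy as np

rng = np.random.default_rng(0)
n_prompts, n_examples, budget = 20, 200, 600  # hypothetical sizes

# Simulate ground-truth correctness probabilities and binary outcomes.
true_theta = rng.normal(0.0, 1.0, n_prompts)      # prompt "ability"
true_beta = rng.normal(0.0, 1.0, n_examples)      # example "easiness"
prob = 1.0 / (1.0 + np.exp(-(true_theta[:, None] + true_beta[None, :])))
Y = rng.binomial(1, prob)                          # full correctness matrix

# Observe only a random subset of cells (the evaluation budget).
mask = np.zeros((n_prompts, n_examples), dtype=bool)
idx = rng.choice(n_prompts * n_examples, size=budget, replace=False)
mask.flat[idx] = True

# Fit prompt and example effects by gradient ascent on the observed likelihood.
theta = np.zeros(n_prompts)
beta = np.zeros(n_examples)
lr = 0.1
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-(theta[:, None] + beta[None, :])))
    resid = np.where(mask, Y - p_hat, 0.0)         # gradient only on observed cells
    theta += lr * resid.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
    beta += lr * resid.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)

# Estimate each prompt's accuracy by averaging predicted probabilities
# over all examples, observed or not.
est_acc = (1.0 / (1.0 + np.exp(-(theta[:, None] + beta[None, :])))).mean(axis=1)
true_acc = prob.mean(axis=1)
print("mean abs error of per-prompt accuracy:", np.abs(est_acc - true_acc).mean())
```

Even though only a fraction of the full prompt-by-example grid is evaluated, the shared structure lets the model recover reasonable per-prompt performance estimates, which is the efficiency gain the talk describes.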
Syllabus
Efficient Multi-Prompt Evaluation Explained
Taught by
Unify
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Natural Language Processing (Columbia University via Coursera)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)