YoVDO

Efficient Multi-Prompt Evaluation Explained

Offered By: Unify via YouTube

Tags

Machine Learning Courses Artificial Intelligence Courses Statistical Methods Courses

Course Description

Overview

Explore a comprehensive presentation on PromptEval, a novel method for estimating large language model performance across multiple prompts. Delve into the research conducted by Felipe Polo from the University of Michigan and his co-authors, which introduces an efficient approach to evaluate LLMs under practical budget constraints. Learn how PromptEval borrows strength across prompts and examples to produce accurate performance estimates. Gain insights into the methodology, implications, and potential applications of this innovative evaluation technique in the field of AI and natural language processing.
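PromptEval itself fits an Item Response Theory model to share information across prompts and examples; as a loose illustration of the general idea — evaluating only a budgeted subset of (prompt, example) cells and "borrowing strength" when estimating per-prompt accuracy — here is a minimal sketch using a simple pseudo-count shrinkage estimator instead. All names and parameters (e.g. `alpha`) are hypothetical and not part of PromptEval's actual method.

```python
import random

def shrinkage_estimates(observed, n_prompts, alpha=5.0):
    """Estimate per-prompt accuracy from a sparse sample of (prompt, example)
    binary outcomes, shrinking each prompt's raw mean toward the global mean.
    `observed` maps (prompt_idx, example_idx) -> 0/1 correctness.
    `alpha` is a pseudo-count controlling shrinkage strength (an assumption
    of this sketch, not a PromptEval parameter)."""
    global_mean = sum(observed.values()) / len(observed)
    estimates = []
    for p in range(n_prompts):
        vals = [v for (pi, _), v in observed.items() if pi == p]
        # Fewer observations for a prompt -> estimate pulled closer
        # to the global mean; many observations -> closer to its raw mean.
        estimates.append((sum(vals) + alpha * global_mean) / (len(vals) + alpha))
    return estimates

# Simulate a 20-prompt x 200-example benchmark, but only evaluate
# 400 of the 4000 (prompt, example) cells -- a 10% budget.
random.seed(0)
n_prompts, n_examples, budget = 20, 200, 400
true_acc = [0.4 + 0.4 * p / (n_prompts - 1) for p in range(n_prompts)]
cells = [(p, e) for p in range(n_prompts) for e in range(n_examples)]
sampled = random.sample(cells, budget)
observed = {(p, e): int(random.random() < true_acc[p]) for p, e in sampled}
est = shrinkage_estimates(observed, n_prompts)
```

With the estimates for every prompt in hand, one can then read off distributional quantities of interest (e.g. the best-, worst-, and median-prompt performance) without paying for the full evaluation grid.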

Syllabus

Efficient Multi-Prompt Evaluation Explained


Taught by

Unify

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent