
Evaluating Large-Scale Learning Systems

Offered By: Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube

Tags

Machine Learning Courses
Regular Expressions Courses
Distributed Systems Courses
Federated Learning Courses
Model Evaluation Courses
Data Privacy Courses

Course Description

Overview

Explore challenges and solutions in evaluating large-scale machine learning systems through this insightful conference talk by Virginia Smith. Delve into the complexities of assessing models trained in federated networks of devices, addressing issues such as device subsampling, heterogeneity, and privacy that can impact evaluation reliability. Discover ReLM, a system designed for validating and querying large language models (LLMs), which utilizes regular expressions to enable faster and more effective LLM evaluation. Learn about the importance of faithful evaluations in deploying machine learning models and gain insights into addressing concerns such as data memorization, bias, and inappropriate language in LLMs. Recorded at SPCL_Bcast #41 on October 13, 2023, this 59-minute talk provides valuable knowledge for researchers and practitioners working with large-scale learning systems.
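To make the regex-based evaluation idea concrete, here is a minimal illustrative sketch of the kind of pattern-matching check the talk describes, written as a plain Python filter over sampled model outputs. It is not the ReLM API: ReLM compiles such patterns into constrained queries against the model itself, whereas this sketch only scans already-generated text, and the names PHONE_PATTERN and flag_memorized_outputs are hypothetical.

import re

# Hypothetical pattern for a class of sensitive strings (here, US-style
# phone numbers) whose appearance in generations could indicate memorization.
PHONE_PATTERN = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def flag_memorized_outputs(generations):
    """Return the generations that match the sensitive pattern."""
    return [text for text in generations if PHONE_PATTERN.search(text)]

# Example usage with made-up generations.
samples = [
    "The capital of France is Paris.",
    "You can reach me at 555-867-5309 any time.",
]
print(flag_memorized_outputs(samples))  # flags only the second sample

A post-hoc scan like this only catches what happens to be sampled; the motivation for a system such as ReLM is to turn the same regular-expression specification into a direct, more exhaustive query over the model's output space.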

Syllabus

[SPCL_Bcast] Evaluating Large-Scale Learning Systems


Taught by

Scalable Parallel Computing Lab, SPCL @ ETH Zurich

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent