Evaluating Large-Scale Learning Systems

Offered By: Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube

Tags

Machine Learning Courses, Regular Expressions Courses, Distributed Systems Courses, Federated Learning Courses, Model Evaluation Courses, Data Privacy Courses

Course Description

Overview

Explore challenges and solutions in evaluating large-scale machine learning systems through this insightful conference talk by Virginia Smith. Delve into the complexities of assessing models trained in federated networks of devices, addressing issues such as device subsampling, heterogeneity, and privacy that can impact evaluation reliability. Discover ReLM, a system designed for validating and querying large language models (LLMs), which utilizes regular expressions to enable faster and more effective LLM evaluation. Learn about the importance of faithful evaluations in deploying machine learning models and gain insights into addressing concerns such as data memorization, bias, and inappropriate language in LLMs. Recorded at SPCL_Bcast #41 on October 13, 2023, this 59-minute talk provides valuable knowledge for researchers and practitioners working with large-scale learning systems.
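To make the regular-expression idea concrete, here is a minimal sketch that does not use ReLM's actual API: it assumes a hypothetical generate() function standing in for any LLM sampling call and simply screens model outputs against a pattern, for example to probe whether generations contain memorized phone-number-like strings.

```python
import re

# Hypothetical stand-in for an LLM sampling call; ReLM's real interface differs
# and works with the pattern during decoding rather than filtering afterwards.
def generate(prompt: str, n_samples: int = 3) -> list[str]:
    # Placeholder outputs for illustration only.
    return [
        "Call me at 555-0142 tomorrow.",
        "The weather is nice today.",
        "My number is 867-5309.",
    ]

# Pattern describing the kind of string we want to detect in model output,
# here a simple US-style phone number (illustrative, not ReLM's query syntax).
PHONE_PATTERN = re.compile(r"\b\d{3}-\d{4}\b")

def screen_outputs(prompt: str) -> list[str]:
    """Return generated samples that match the pattern of interest."""
    return [s for s in generate(prompt) if PHONE_PATTERN.search(s)]

if __name__ == "__main__":
    hits = screen_outputs("Please share your contact details:")
    print(f"{len(hits)} sample(s) matched the pattern:")
    for h in hits:
        print(" -", h)
```

ReLM itself goes further than this post-hoc filter: as described in the talk, it compiles the regular expression into the query itself so that only matching sequences need to be explored, which is what makes the evaluation faster than sampling and discarding.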

Syllabus

[SPCL_Bcast] Evaluating Large-Scale Learning Systems


Taught by

Scalable Parallel Computing Lab, SPCL @ ETH Zurich

Related Courses

Macroeconometric Forecasting
International Monetary Fund via edX
Machine Learning With Big Data
University of California, San Diego via Coursera
Data Science at Scale - Capstone Project
University of Washington via Coursera
Structural Equation Model and its Applications (Cantonese)
The Chinese University of Hong Kong via Coursera
Data Science in Action - Building a Predictive Churn Model
SAP Learning