Evaluation Measures for Search and Recommender Systems

Offered By: James Briggs via YouTube

Tags

Recommender Systems Courses, Python Courses

Course Description

Overview

Explore popular offline metrics for evaluating search and recommender systems in this 31-minute video. Learn about Recall@K, Mean Reciprocal Rank (MRR), Mean Average Precision (MAP@K), and Normalized Discounted Cumulative Gain (NDCG@K), with Python demonstrations of each metric. Understand why evaluation measures matter in information retrieval systems, how they underpin the success of large search and recommendation products, and how they inform design decisions. Gain insights into dataset preparation, retrieval basics, and the pros and cons of each evaluation metric. Additional resources include a related Pinecone article, code notebooks, and a discounted NLP course.
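The first two metrics named above can be sketched briefly in Python. This is a minimal illustration under common textbook definitions, not the course's own notebooks, and the function names are my own:

```python
def recall_at_k(relevant, retrieved, k):
    # Recall@K: fraction of all relevant items that appear in the top-k results.
    top_k = retrieved[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(relevant)

def mean_reciprocal_rank(queries):
    # MRR: average over queries of 1 / (rank of the first relevant result),
    # counting 0 for queries where no relevant result is returned.
    # queries: list of (relevant_set, ranked_results) pairs.
    total = 0.0
    for relevant, retrieved in queries:
        for rank, item in enumerate(retrieved, start=1):
            if item in relevant:
                total += 1.0 / rank
                break
    return total / len(queries)
```

For example, with relevant items `{"a", "b"}` and ranking `["a", "c", "b", "d"]`, Recall@2 is 0.5 (only "a" appears in the top 2) while Recall@3 is 1.0.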

Syllabus

Intro
Offline Metrics
Dataset and Retrieval 101
Recall@K
Recall@K in Python
Disadvantages of Recall@K
MRR
MRR in Python
MAP@K
MAP@K in Python
NDCG@K
Pros and Cons of NDCG@K
Final Thoughts
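The two order-aware metrics from the syllabus, MAP@K and NDCG@K, can likewise be sketched in Python. Again this is a minimal sketch under standard definitions rather than the course's own code, and the function names are my own:

```python
import math

def average_precision_at_k(relevant, retrieved, k):
    # AP@K for one query: mean of precision@i over positions i where a
    # relevant item appears in the top-k; MAP@K averages this over queries.
    hits, score = 0, 0.0
    for i, item in enumerate(retrieved[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    return score / min(len(relevant), k) if relevant else 0.0

def ndcg_at_k(relevance, k):
    # NDCG@K: DCG of the ranking's graded relevance scores, normalized by
    # the DCG of the ideal (descending) ordering of those same scores.
    def dcg(scores):
        return sum(s / math.log2(i + 2) for i, s in enumerate(scores[:k]))
    ideal = dcg(sorted(relevance, reverse=True))
    return dcg(relevance) / ideal if ideal > 0 else 0.0
```

A ranking whose graded relevance scores are already in descending order scores an NDCG@K of exactly 1.0, which is what makes the metric comparable across queries.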


Taught by

James Briggs

Related Courses

Introduction to Recommender Systems
University of Minnesota via Coursera
Text Retrieval and Search Engines
University of Illinois at Urbana-Champaign via Coursera
Machine Learning: Recommender Systems & Dimensionality Reduction
University of Washington via Coursera
Java Programming: Build a Recommendation System
Duke University via Coursera
Introduction to Recommender Systems: Non-Personalized and Content-Based
University of Minnesota via Coursera