Evaluation Techniques for Large Language Models

Offered By: MLOps World: Machine Learning in Production via YouTube

Tags

MLOps Courses Jupyter Notebooks Courses Prompt Engineering Courses Ethics in AI Courses Hugging Face Courses

Course Description

Overview

Explore practical tools and best practices for evaluating and choosing Large Language Models (LLMs) in this comprehensive tutorial presented by Rajiv Shah, Machine Learning Engineer at Hugging Face. Gain insights into the capabilities of LLMs compared to traditional ML models and learn various evaluation techniques, including evaluation suites, head-to-head competition approaches, and using LLMs to evaluate other LLMs. Delve into the subtle factors affecting evaluation, such as the role of prompts, tokenization, and requirements for factual accuracy. Examine model bias and ethical considerations through working examples. Acquire an in-depth understanding of LLM evaluation tradeoffs and methods, with reusable code provided in Jupyter Notebooks for each technique discussed.
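One of the techniques the description mentions is head-to-head competition between models. A common way to score such pairwise comparisons is an Elo rating system (as popularized by Chatbot Arena). The sketch below is a minimal, self-contained illustration of that idea; the model names, match outcomes, and K-factor are hypothetical, and the tutorial's own notebooks may use different tooling.

```python
# Minimal sketch: scoring head-to-head LLM comparisons with Elo ratings.
# Model names and judged outcomes below are made up for illustration.

def elo_update(r_a, r_b, score_a, k=32):
    """Return updated (r_a, r_b) after one head-to-head comparison.

    score_a is 1.0 if model A's answer was preferred, 0.0 if model B's
    was, and 0.5 for a tie.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two hypothetical models, both starting at a rating of 1000.
ratings = {"model-a": 1000.0, "model-b": 1000.0}

# Judged outcomes over five prompts: 1.0 = model-a's answer preferred.
outcomes = [1.0, 1.0, 0.5, 1.0, 0.0]
for score in outcomes:
    ratings["model-a"], ratings["model-b"] = elo_update(
        ratings["model-a"], ratings["model-b"], score
    )

# Rank models by final rating, strongest first.
print(sorted(ratings, key=ratings.get, reverse=True))
```

Elo is zero-sum (one model's gain is the other's loss), so ratings stay comparable as more pairwise judgments accumulate; the preference labels themselves can come from human raters or, as the tutorial also covers, from an LLM acting as judge.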

Syllabus

Evaluation Techniques for Large Language Models


Taught by

MLOps World: Machine Learning in Production

Related Courses

Machine Learning Operations (MLOps): Getting Started
Google Cloud via Coursera
Designing and Implementing Machine Learning Systems
Higher School of Economics via Coursera
Demystifying Machine Learning Operations (MLOps)
Pluralsight
Machine Learning Engineer with Microsoft Azure
Microsoft via Udacity
Machine Learning Engineering for Production (MLOps)
DeepLearning.AI via Coursera