Evaluation Techniques for Large Language Models
Offered By: MLOps World: Machine Learning in Production via YouTube
Course Description
Overview
Explore practical tools and best practices for evaluating and choosing Large Language Models (LLMs) in this comprehensive tutorial presented by Rajiv Shah, Machine Learning Engineer at Hugging Face. Gain insights into the capabilities of LLMs compared to traditional ML models and learn various evaluation techniques, including evaluation suites, head-to-head competition approaches, and using LLMs to evaluate other LLMs. Delve into the subtle factors affecting evaluation, such as the role of prompts, tokenization, and requirements for factual accuracy. Examine model bias and ethical considerations through working examples. Acquire an in-depth understanding of LLM evaluation tradeoffs and methods, with reusable code provided in Jupyter Notebooks for each technique discussed.
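Since the tutorial covers head-to-head competition approaches, here is a minimal sketch of how such arena-style comparisons are often aggregated into a model ranking with Elo ratings (the approach popularized by chatbot arenas). The model names and match outcomes below are hypothetical placeholders, not results from the course.

```python
# Minimal sketch of a head-to-head (arena-style) LLM evaluation loop using
# Elo ratings, which turn pairwise preference judgments into a ranking.
# Model names and match results are hypothetical placeholders.

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return updated ratings after one pairwise comparison.

    score_a is 1.0 if model A is preferred, 0.0 if model B is, 0.5 for a tie.
    """
    # Expected score for A under the Elo model
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Hypothetical pairwise judgments: (model_a, model_b, score_for_a).
# In practice these would come from human raters or an LLM judge.
matches = [
    ("model-x", "model-y", 1.0),
    ("model-y", "model-z", 0.5),
    ("model-x", "model-z", 1.0),
]

ratings = {"model-x": 1000.0, "model-y": 1000.0, "model-z": 1000.0}
for a, b, score in matches:
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], score)

# Rank models from highest to lowest rating
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```

The judging step (deciding which response wins each match) is where prompts, tokenization, and factual-accuracy requirements come into play; the rating update itself is mechanical.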
Syllabus
Evaluation Techniques for Large Language Models
Taught by
MLOps World: Machine Learning in Production
Related Courses
Hugging Face on Azure - Partnership and Solutions Announcement (Microsoft via YouTube)
Question Answering in Azure AI - Custom and Prebuilt Solutions - Episode 49 (Microsoft via YouTube)
Open Source Platforms for MLOps (Duke University via Coursera)
Masked Language Modelling - Retraining BERT with Hugging Face Trainer - Coding Tutorial (rupert ai via YouTube)
Masked Language Modelling with Hugging Face - Microsoft Sentence Completion - Coding Tutorial (rupert ai via YouTube)