YoVDO

ORPO: Monolithic Preference Optimization without Reference Model

Offered By: Yannic Kilcher via YouTube

Tags

Machine Learning Courses AI Ethics Courses Model Optimization Courses Language Models Courses Supervised Fine-Tuning Courses

Course Description

Overview

Explore a comprehensive analysis of ORPO (Odds Ratio Preference Optimization), an algorithm for aligning language models with human preferences without a reference model. Delve into the paper's key finding: ORPO eliminates the need for a separate preference alignment phase in language model training. Examine the empirical and theoretical evidence that an odds-ratio penalty effectively contrasts favored and disfavored generation styles during supervised fine-tuning. Learn how ORPO, when applied to models such as Phi-2, Llama-2, and Mistral, achieves state-of-the-art results on benchmarks including AlpacaEval 2.0, IFEval, and MT-Bench, surpassing larger language models. Gain insight into the crucial role of supervised fine-tuning in preference alignment and how ORPO's approach simplifies the process while maintaining high performance.
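To make the mechanism described above concrete, here is a minimal sketch of the odds-ratio objective as the ORPO paper formulates it: the loss is the usual supervised fine-tuning NLL on the chosen response plus a penalty `-log σ(log odds(chosen) - log odds(rejected))`, where `odds(p) = p / (1 - p)` and `p` is the length-normalized sequence probability. Function names, the toy log-probabilities, and the weighting `lam` are illustrative assumptions, not the paper's reference implementation.

```python
import math

def log_odds(avg_logp):
    # Length-normalized sequence probability p = exp(mean token log-prob);
    # log odds(p) = log p - log(1 - p).
    p = math.exp(avg_logp)
    return math.log(p) - math.log(1.0 - p)

def orpo_loss(avg_logp_chosen, avg_logp_rejected, nll_chosen, lam=0.1):
    # Odds-ratio term: -log sigmoid(log odds(chosen) - log odds(rejected)).
    ratio = log_odds(avg_logp_chosen) - log_odds(avg_logp_rejected)
    l_or = -math.log(1.0 / (1.0 + math.exp(-ratio)))
    # Monolithic objective: SFT loss plus the weighted odds-ratio penalty
    # (lam is a hyperparameter; no reference model appears anywhere).
    return nll_chosen + lam * l_or

# Toy example: chosen response has higher average log-prob than rejected.
loss = orpo_loss(avg_logp_chosen=-1.0, avg_logp_rejected=-2.0, nll_chosen=1.0)
```

Note the contrast with DPO: because the odds ratio is computed from the policy's own probabilities alone, no frozen reference model is needed, which is what lets ORPO fold preference alignment into the supervised fine-tuning stage.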

Syllabus

ORPO: Monolithic Preference Optimization without Reference Model (Paper Explained)


Taught by

Yannic Kilcher

Related Courses

3D Printing for Everyone
Tomsk State University via Coursera
Developing a Multidimensional Data Model
Microsoft via edX
Launching into Machine Learning (Japanese)
Google Cloud via Coursera
Art and Science of Machine Learning (Japanese)
Google Cloud via Coursera
Launching into Machine Learning (German)
Google Cloud via Coursera