ORPO: Monolithic Preference Optimization without Reference Model

Offered By: Yannic Kilcher via YouTube

Tags

Machine Learning Courses
AI Ethics Courses
Model Optimization Courses
Language Models Courses
Supervised Fine-Tuning Courses

Course Description

Overview

Explore a comprehensive analysis of ORPO (Odds Ratio Preference Optimization), a groundbreaking approach to language model preference alignment that requires no reference model. Delve into the paper's key finding that ORPO eliminates the need for a separate preference alignment phase in language model training. Examine the empirical and theoretical evidence for using the odds ratio to contrast favored and disfavored generation styles during supervised fine-tuning. Learn how ORPO, applied to models such as Phi-2, Llama-2, and Mistral, achieves state-of-the-art performance on benchmarks including AlpacaEval 2.0, IFEval, and MT-Bench, surpassing larger language models. Gain insights into the crucial role of supervised fine-tuning in preference alignment and understand how ORPO's approach simplifies the training pipeline while maintaining high performance.
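To make the idea concrete, here is a minimal PyTorch-style sketch of an ORPO-like objective, following the paper's description: a standard SFT loss on the chosen response plus a penalty on the log odds ratio between chosen and rejected responses. This is an illustrative reconstruction, not the authors' reference code; the helper name avg_log_prob, tensor shapes, and the lam value are assumptions.

```python
import torch
import torch.nn.functional as F

def avg_log_prob(logits, labels, mask):
    # Length-normalized log P(y|x): average per-token log-probability over
    # the response tokens. mask is 1.0 on response positions, 0.0 elsewhere;
    # labels must hold valid token ids everywhere (masked positions may be 0).
    logp = F.log_softmax(logits, dim=-1)                            # (B, T, V)
    token_logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)  # (B, T)
    return (token_logp * mask).sum(-1) / mask.sum(-1)               # (B,)

def orpo_loss(logits_w, labels_w, mask_w, logits_l, labels_l, mask_l, lam=0.1):
    # Log-probabilities of the chosen (w) and rejected (l) responses.
    logp_w = avg_log_prob(logits_w, labels_w, mask_w)
    logp_l = avg_log_prob(logits_l, labels_l, mask_l)

    # log odds(y|x) = log(P / (1 - P)); log1p(-exp(logp)) is a
    # numerically stable log(1 - P) since logp < 0.
    log_odds_w = logp_w - torch.log1p(-torch.exp(logp_w))
    log_odds_l = logp_l - torch.log1p(-torch.exp(logp_l))

    # Odds-ratio penalty: raise the odds of the chosen response
    # relative to the rejected one.
    l_or = -F.logsigmoid(log_odds_w - log_odds_l)

    # Conventional SFT term: negative log-likelihood of the chosen response.
    l_sft = -logp_w

    # lam trades off the preference penalty against the SFT term;
    # 0.1 here is an assumed setting, not a prescribed default.
    return (l_sft + lam * l_or).mean()
```

Because both terms are computed from the policy's own logits, a single forward pass per response suffices; no frozen reference model is kept in memory, which is the "monolithic" simplification the title refers to.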

Syllabus

ORPO: Monolithic Preference Optimization without Reference Model (Paper Explained)


Taught by

Yannic Kilcher

Related Courses

Knowledge-Based AI: Cognitive Systems
Georgia Institute of Technology via Udacity
AI for Everyone: Master the Basics
IBM via edX
Introducción a La Inteligencia Artificial (IA)
IBM via Coursera
AI for Legal Professionals (I): Law and Policy
National Chiao Tung University via FutureLearn
Artificial Intelligence Ethics in Action
LearnQuest via Coursera