ORPO: Monolithic Preference Optimization without Reference Model
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a comprehensive analysis of the ORPO (Odds Ratio Preference Optimization) algorithm, an approach to language model preference alignment that requires no reference model. Delve into the paper's key findings, which show how ORPO eliminates the need for a separate preference alignment phase in language model training. Examine the empirical and theoretical evidence supporting the effectiveness of the odds ratio in contrasting favored and disfavored generation styles during supervised fine-tuning. Learn how ORPO, when applied to models like Phi-2, Llama-2, and Mistral, achieves state-of-the-art performance on benchmarks such as AlpacaEval 2.0, IFEval, and MT-Bench, surpassing larger language models. Gain insights into the crucial role of supervised fine-tuning in preference alignment and understand how ORPO's approach simplifies the process while maintaining high performance.
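The odds-ratio idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function names and the λ value are assumptions, and the per-pair loss is shown on scalar average log-probabilities rather than on real model outputs.

```python
import math

def odds(logp):
    """Turn an average per-token log-probability into odds.
    odds(y|x) = p / (1 - p), with p = exp(logp)."""
    p = math.exp(logp)
    return p / (1.0 - p)

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """Illustrative ORPO objective for one preference pair.

    logp_chosen / logp_rejected: average per-token log-probs of the
    favored and disfavored responses under the model being trained.
    Total loss = SFT negative log-likelihood on the chosen response
    plus lam times an odds-ratio penalty; no reference model is used.
    """
    log_odds_ratio = math.log(odds(logp_chosen)) - math.log(odds(logp_rejected))
    # -log sigmoid(log_odds_ratio): shrinks as the chosen response
    # becomes relatively more likely than the rejected one
    ratio_loss = math.log(1.0 + math.exp(-log_odds_ratio))
    sft_loss = -logp_chosen  # standard supervised fine-tuning term
    return sft_loss + lam * ratio_loss
```

Because both terms depend only on the current model's probabilities, the penalty can be added directly to the usual fine-tuning loss, which is what lets ORPO fold preference alignment into the SFT phase.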
Syllabus
ORPO: Monolithic Preference Optimization without Reference Model (Paper Explained)
Taught by
Yannic Kilcher
Related Courses
Microsoft Bot Framework and Conversation as a Platform - Microsoft via edX
Unlocking the Power of OpenAI for Startups - Microsoft for Startups - Microsoft via YouTube
Improving Customer Experiences with Speech to Text and Text to Speech - Microsoft via YouTube
Stanford Seminar - Deep Learning in Speech Recognition - Stanford University via YouTube
Select Topics in Python: Natural Language Processing - Codio via Coursera