LLM Alignment - Techniques for Building Human-Aligned AI
Offered By: Data Science Dojo via YouTube
Course Description
Overview
Explore cutting-edge techniques for aligning Large Language Models (LLMs) with human values and ethics in this informative webinar. Trace the evolution of LLMs from their inception to their current sophisticated forms, and discover how advanced alignment methodologies are shaping the future of AI. Learn about key strategies such as Reinforcement Learning from Human Feedback (RLHF), Instruction Fine-Tuning (IFT), and Direct Preference Optimization (DPO) that make AI systems safer and more reliable. Gain insight into the progression from early models to advanced LLMs, understand the role of RLHF in aligning AI with human values, and explore the effectiveness of IFT and DPO in refining LLM responses. Engage in discussions about ongoing challenges and ethical considerations in AI alignment. Join Hoang Tran, Senior Research Scientist at Snorkel AI, for this hour-long session designed to deepen your understanding of building human-aligned AI systems.
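As a quick illustration of one of the techniques the webinar covers, below is a minimal sketch of the DPO objective in PyTorch. It is not taken from the session itself; the function name, argument names, and the toy values are illustrative assumptions. DPO fine-tunes a policy model directly on preference pairs by increasing the policy-to-reference log-probability margin of the preferred (chosen) completion over the rejected one, without training a separate reward model.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Sketch of the Direct Preference Optimization loss.

    Each argument is a tensor of summed log-probabilities that the
    trainable policy / frozen reference model assigns to the chosen
    (preferred) or rejected completion for each prompt in the batch.
    """
    # Implicit rewards: scaled log-ratio of policy to reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss that pushes the chosen reward above the rejected one
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with made-up log-probabilities for two preference pairs
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -11.0]),
                torch.tensor([-12.5, -10.0]), torch.tensor([-13.5, -10.5]))
print(loss)
```

In contrast, RLHF typically trains an explicit reward model on the same kind of preference data and then optimizes the policy against it with reinforcement learning (e.g., PPO); DPO collapses those two stages into the single supervised objective sketched above.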
Syllabus
LLM Alignment: Techniques for Building Human-Aligned AI
Taught by
Data Science Dojo
Related Courses
Knowledge-Based AI: Cognitive Systems - Georgia Institute of Technology via Udacity
AI for Everyone: Master the Basics - IBM via edX
Introducción a La Inteligencia Artificial (IA) - IBM via Coursera
AI for Legal Professionals (I): Law and Policy - National Chiao Tung University via FutureLearn
Artificial Intelligence Ethics in Action - LearnQuest via Coursera