Towards Reliable Use of Large Language Models: Better Detection, Consistency, and Instruction-Tuning
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore research on improving the reliability and practical application of Large Language Models (LLMs) in this lecture by Christopher D. Manning of Stanford University. Delve into three tools for making LLMs more dependable: ConCoRD, which improves a model's self-consistency; DetectGPT, which detects machine-generated text from the shape of a model's own log-probabilities; and Direct Preference Optimization (DPO), which steers LLMs using human preference data. Gain insight into the challenges of deploying LLMs in production environments and the importance of building a robust ecosystem around these models. Learn how these advances contribute to more consistent, detectable, and controllable language models, paving the way for their reliable use across applications.
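For readers who want a concrete anchor before watching, here are the published objectives behind two of the tools named above, taken from the DetectGPT and DPO papers (Mitchell et al., 2023; Rafailov et al., 2023). The notation follows those papers, not anything specific to this lecture's slides.

% DetectGPT's perturbation discrepancy: machine-generated text tends to sit near
% a local maximum of the source model's log-probability, so random rewrites
% \tilde{x} of a machine-generated x lower it more than rewrites of human text do.
d(x, p_\theta) \;=\; \log p_\theta(x) \;-\; \mathbb{E}_{\tilde{x} \sim q(\cdot \mid x)}\!\left[\log p_\theta(\tilde{x})\right]

% DPO's loss: fit the policy \pi_\theta directly on preference pairs
% (y_w preferred over y_l for prompt x), with no separate reward model,
% regularized toward a reference policy \pi_{\mathrm{ref}} via the scale \beta.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\, \pi_{\mathrm{ref}}) \;=\; -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]

In both cases the key move is reusing the language model's own likelihoods: as a detection statistic in DetectGPT, and as an implicit reward in DPO.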
Syllabus
Towards Reliable Use of Large Language Models: Better Detection, Consistency, and Instruction-Tuning
Taught by
Simons Institute
Related Courses
Role of Instruction-Tuning and Prompt Engineering in Clinical Domain - MedAI 125 (Stanford University via YouTube)
Generative AI Advance Fine-Tuning for LLMs (IBM via Coursera)
SeaLLMs - Large Language Models for Southeast Asia (VinAI via YouTube)
Fine-tuning LLMs with Hugging Face SFT and QLoRA - LLMOps Techniques (LLMOps Space via YouTube)
Prompt Engineering Techniques for Microsoft Phi3-mini 128k Instruct (The Machine Learning Engineer via YouTube)