Prompt Optimization and Parameter Efficient Fine Tuning for Large Language Models
Offered By: Toronto Machine Learning Series (TMLS) via YouTube
Course Description
Overview
Explore the cutting-edge techniques of prompt optimization and parameter-efficient fine-tuning (PEFT) in this 28-minute conference talk from the Toronto Machine Learning Series. Delve into the growing importance of prompting and prompt design as large language models (LLMs) become increasingly general-purpose. Discover how well-constructed prompts can significantly enhance LLM performance across various downstream tasks. Examine the challenges of manual prompt optimization and learn about state-of-the-art optimization techniques, including both discrete and continuous approaches. Investigate PEFT methods, with a focus on Adapters and LoRA, and understand how these approaches can match or surpass full-model fine-tuning performance on many tasks. Gain valuable insights from David Emerson, an Applied Machine Learning Scientist at the Vector Institute, as he shares his expertise in this rapidly evolving field of AI research.
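For readers unfamiliar with LoRA, the sketch below illustrates the core idea the talk covers: freeze a pretrained layer and train only a small low-rank update alongside it. This is a minimal illustration assuming PyTorch is available, not code from the presentation; the layer sizes, rank, and scaling value are arbitrary choices for the example.

```python
# Minimal LoRA-style wrapper (illustrative sketch, not code from the talk).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update."""

    def __init__(self, base_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        # Freeze the original (pretrained) weights.
        for p in self.base.parameters():
            p.requires_grad = False
        in_features = base_linear.in_features
        out_features = base_linear.out_features
        # Low-rank factors A and B; only these are trained.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction (B @ A) x.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Example: adapt a single 768x768 projection; only 2 * rank * 768 parameters train.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable}")
```

Because the base weights stay frozen and only the small A and B matrices receive gradients, the number of trainable parameters is a tiny fraction of the full model, which is the sense in which such PEFT methods can approach full-model fine-tuning at much lower cost.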
Syllabus
Prompt Optimization and Parameter Efficient Fine Tuning
Taught by
Toronto Machine Learning Series (TMLS)
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent