Prompt Optimization and Parameter-Efficient Fine-Tuning for Large Language Models
Offered By: Toronto Machine Learning Series (TMLS) via YouTube
Course Description
Overview
Explore the cutting-edge techniques of prompt optimization and parameter-efficient fine-tuning (PEFT) in this 28-minute conference talk from the Toronto Machine Learning Series. Delve into the growing importance of prompting and prompt design as large language models (LLMs) become increasingly general-purpose. Discover how well-constructed prompts can significantly enhance LLM performance across a range of downstream tasks. Examine the challenges of manual prompt optimization and learn about state-of-the-art optimization techniques, both discrete and continuous. Investigate PEFT methods, with a focus on Adapters and LoRA, and understand how these approaches can match or surpass full-model fine-tuning on many tasks. Gain valuable insights from David Emerson, an Applied Machine Learning Scientist at the Vector Institute, as he shares his expertise in this rapidly evolving field of AI research.
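To make the PEFT idea mentioned above concrete, the following is a minimal, illustrative sketch of a LoRA-style layer (not code from the talk): the pretrained weight matrix W is frozen, and only a low-rank update B @ A is trained, which is why the number of trainable parameters can be a small fraction of full fine-tuning. All names and dimensions here are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative LoRA sketch (not from the talk). Instead of updating the full
# weight W (d_out x d_in), LoRA learns a low-rank correction
# W + (alpha / r) * B @ A, training only r * (d_in + d_out) parameters.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8   # toy dimensions, assumed for the example

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def lora_forward(x):
    """Forward pass: frozen base layer plus scaled low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer exactly matches the base layer,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size          # 4096 parameters in full fine-tuning
lora_params = A.size + B.size # 512 trainable parameters with rank r = 4
print(f"trainable: {lora_params} vs full fine-tuning: {full_params}")
```

With rank 4 on a 64x64 layer, the trainable parameter count drops to 512 versus 4096, an 8x reduction; on real LLM weight matrices the ratio is far larger.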
Syllabus
Prompt Optimization and Parameter-Efficient Fine-Tuning
Taught by
Toronto Machine Learning Series (TMLS)
Related Courses
How to Do Stable Diffusion LORA Training by Using Web UI on Different Models
Software Engineering Courses - SE Courses via YouTube
MicroPython & WiFi
Kevin McAleer via YouTube
Building a Wireless Community Sensor Network with LoRa
Hackaday via YouTube
ComfyUI - Node Based Stable Diffusion UI
Olivio Sarikas via YouTube
AI Masterclass for Everyone - Stable Diffusion, ControlNet, Depth Map, LORA, and VR
Hugh Hou via YouTube