Author Interview - Memory-Assisted Prompt Editing to Improve GPT-3 After Deployment
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore an in-depth interview with authors Aman Madaan and Niket Tandon discussing their research on improving GPT-3 performance after deployment without model retraining. Learn about their innovative method of maintaining a memory of interactions to dynamically adapt new prompts, enabling non-intrusive fine-tuning and personalization. Discover insights on motivations, experimental results, cross-lingual customization, and potential applications in recommender systems. Gain valuable knowledge about interacting with large language models and the challenges faced during the project. Delve into discussions on model size implications, clarification prompts, and future directions for enhancing very large pre-trained language models.
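The core idea described above, keeping a memory of user feedback and prepending relevant past clarifications to new prompts, can be sketched in a few lines. This is an illustrative toy version only, with hypothetical names and a crude lexical retriever; it is not the authors' actual implementation, which pairs the retrieved feedback with calls to GPT-3.

```python
def similarity(a: str, b: str) -> float:
    """Crude lexical overlap between two queries (a stand-in for a learned retriever)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

class PromptMemory:
    """Toy memory-assisted prompt editor: stores (query, feedback) pairs and
    prepends the most similar past feedback to a new query before it is sent
    to the language model."""

    def __init__(self, threshold: float = 0.5):
        self.memory: list[tuple[str, str]] = []  # (past query, user feedback)
        self.threshold = threshold

    def remember(self, query: str, feedback: str) -> None:
        """Store user feedback on a query the model previously misunderstood."""
        self.memory.append((query, feedback))

    def edit_prompt(self, query: str) -> str:
        """Return the query, prefixed with the best-matching stored feedback, if any."""
        scored = [(similarity(query, q), fb) for q, fb in self.memory]
        if scored:
            best_score, best_fb = max(scored, key=lambda s: s[0])
            if best_score >= self.threshold:
                return f"Hint: {best_fb}\nQuestion: {query}"
        return f"Question: {query}"

# Example: feedback given once is reused for a similar later query.
mem = PromptMemory()
mem.remember("what sounds like good", "the user wants a homophone, not a synonym")
print(mem.edit_prompt("what sounds like right"))
```

Because the memory sits outside the model, no retraining is needed: the stored feedback simply edits future prompts, which is what makes the approach non-intrusive.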
Syllabus
- Intro
- Paper Overview
- What was your original motivation?
- There is an updated version of the paper!
- Have you studied this on real-world users?
- How does model size play into providing feedback?
- Can this be used for personalization?
- Discussing experimental results
- Can this be paired with recommender systems?
- What are obvious next steps to make the system more powerful?
- Clarifying the baseline methods
- Exploring cross-lingual customization
- Where did the idea for the clarification prompt come from?
- What did not work out during this project?
- What did you learn about interacting with large models?
- Final thoughts
Taught by
Yannic Kilcher
Related Courses
Introduction to Recommender Systems - University of Minnesota via Coursera
Text Retrieval and Search Engines - University of Illinois at Urbana-Champaign via Coursera
Machine Learning: Recommender Systems & Dimensionality Reduction - University of Washington via Coursera
Java Programming: Build a Recommendation System - Duke University via Coursera
Introduction to Recommender Systems: Non-Personalized and Content-Based - University of Minnesota via Coursera