YoVDO

Author Interview - Memory-Assisted Prompt Editing to Improve GPT-3 After Deployment

Offered By: Yannic Kilcher via YouTube

Tags

- GPT-3 Courses
- Recommender Systems Courses
- ChatGPT Courses

Course Description

Overview

Explore an in-depth interview with authors Aman Madaan and Niket Tandon discussing their research on improving GPT-3 performance after deployment without model retraining. Learn about their innovative method of maintaining a memory of interactions to dynamically adapt new prompts, enabling non-intrusive fine-tuning and personalization. Discover insights on motivations, experimental results, cross-lingual customization, and potential applications in recommender systems. Gain valuable knowledge about interacting with large language models and the challenges faced during the project. Delve into discussions on model size implications, clarification prompts, and future directions for enhancing very large pre-trained language models.
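The core idea described above, keeping a memory of user feedback and prepending relevant past clarifications to new prompts, can be sketched as follows. This is a minimal illustrative toy, not the authors' actual implementation: the class name, the word-overlap similarity, and the example strings are all assumptions made for this sketch.

```python
# Hypothetical sketch of memory-assisted prompt editing. All names and the
# similarity heuristic are illustrative, not taken from the paper's code.

class PromptMemory:
    def __init__(self):
        # Maps a previously misunderstood question to the user's clarifying
        # feedback about what the model got wrong.
        self.memory = {}

    def record_feedback(self, question, feedback):
        # Store feedback given after a bad model response.
        self.memory[question] = feedback

    def edit_prompt(self, question):
        # If a similar question was corrected before, prepend the stored
        # clarification so the model can avoid repeating the mistake.
        for past_q, feedback in self.memory.items():
            if self._similar(question, past_q):
                return f"{feedback}\n{question}"
        return question

    def _similar(self, a, b):
        # Toy similarity: Jaccard overlap of words; a real system would use
        # an embedding-based retriever instead.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1) > 0.5


mem = PromptMemory()
mem.record_feedback(
    "What sounds like 'sighted'?",
    "Clarification: 'sounds like' means homophone, not rhyme.",
)
print(mem.edit_prompt("What sounds like 'bored'?"))
```

The point of the design is that the deployed model is never retrained: all adaptation lives in the external memory, which is why the interview frames this as non-intrusive fine-tuning and a path to per-user personalization.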

Syllabus

- Intro
- Paper Overview
- What was your original motivation?
- There is an updated version of the paper!
- Have you studied this on real-world users?
- How does model size play into providing feedback?
- Can this be used for personalization?
- Discussing experimental results
- Can this be paired with recommender systems?
- What are obvious next steps to make the system more powerful?
- Clarifying the baseline methods
- Exploring cross-lingual customization
- Where did the idea for the clarification prompt come from?
- What did not work out during this project?
- What did you learn about interacting with large models?
- Final thoughts


Taught by

Yannic Kilcher

Related Courses

ChatGPT and AI: A User's Guide for Managers and HR
CNAM via France Université Numérique
Generating New Recipes using GPT-2
Coursera Project Network via Coursera
Deep Learning NLP: Training GPT-2 from scratch
Coursera Project Network via Coursera
Data Science A-Z: Hands-On Exercises & ChatGPT Prize [2024]
Udemy
Deep Learning A-Z 2024: Neural Networks, AI & ChatGPT Prize
Udemy