Instruction Tuning of Large Language Models - Lecture
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
This lecture examines instruction tuning of large language models: how models such as ChatGPT and GPT-4 generalize to tasks they were never explicitly trained on, which properties of instruction data matter most, and how an LLM can be prompted to generate its own instruction-tuning data at scale.
Syllabus
Intro
ChatGPT/GPT-4 are real generalists
How did these models acquire such vast capabilities?
NLP before 2018: building task-specific models
Classical multi-task learning
Generalization to unseen tasks via instructions
Expert-written instructions for all tasks
Strict train/test split for cross-task generalization (see the split sketch after this syllabus)
Instruction tuning significantly improves LLMs
What are the most important factors?
Other models trained on existing NLP datasets
Data is OpenAI's secret weapon
Can we construct a similar instruction dataset by crowdsourcing?
LLMs can be prompted to generate instructions
LMs can be prompted to generate instances
Instruction data generation pipeline (see the generation sketch after this syllabus)
Generating 52K instructions with GPT-3
Tasks generated by GPT-3
Data quality review
Performance on SuperNI
Expert evaluation on 252 user-oriented instructions
Effect of data size and data quality (using human eval)
Takeaways
Licensing concern about using OpenAI output?
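The "strict train/test split" item above refers to splitting at the task level rather than the instance level, so that evaluation measures generalization to entirely unseen tasks. Below is a minimal sketch of such a split; the task names and data are illustrative, not the lecture's actual benchmark.

```python
# Minimal sketch of a strict cross-task train/test split: entire tasks are
# held out, so evaluation measures generalization to unseen tasks, not just
# unseen instances. Task names and examples here are illustrative.
import random

tasks = {
    "sentiment_classification": [("great movie", "positive"), ("dull plot", "negative")],
    "translation_en_fr": [("hello", "bonjour")],
    "summarization": [("long paragraph ...", "short summary")],
    "question_answering": [("Who wrote Hamlet?", "Shakespeare")],
}

random.seed(0)
task_names = sorted(tasks)
random.shuffle(task_names)

n_test = max(1, len(task_names) // 4)  # hold out ~25% of tasks
test_tasks, train_tasks = task_names[:n_test], task_names[n_test:]

# Every instance of a held-out task is excluded from training: the model
# never sees the test task in any form during instruction tuning.
train_data = [(t, x, y) for t in train_tasks for x, y in tasks[t]]
test_data = [(t, x, y) for t in test_tasks for x, y in tasks[t]]

print("train tasks:", train_tasks)
print("test tasks:", test_tasks)
```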
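The instruction data generation pipeline in the syllabus bootstraps new tasks from a small pool of seed instructions, which is how the 52K instructions were generated with GPT-3. Below is a minimal sketch of that loop, assuming the OpenAI Python client; the prompt wording, model name, and novelty filter are simplifications for illustration, not the lecture's actual code.

```python
# Minimal sketch of a Self-Instruct-style data generation loop.
# Assumptions: the `openai` Python client (v1 API, OPENAI_API_KEY in the
# environment); prompts, model name, and thresholds are illustrative.
import random
from openai import OpenAI

client = OpenAI()

seed_instructions = [
    "Classify the sentiment of this movie review as positive or negative.",
    "Translate the following sentence into French.",
    "Summarize the paragraph below in one sentence.",
]

def generate_instruction(pool, n_examples=3):
    """Show the LM a few in-context seed tasks and ask for a new one."""
    examples = random.sample(pool, min(n_examples, len(pool)))
    prompt = "Come up with a new task instruction.\n\n"
    for i, inst in enumerate(examples, 1):
        prompt += f"Task {i}: {inst}\n"
    prompt += f"Task {len(examples) + 1}:"
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return resp.choices[0].message.content.strip()

def generate_instance(instruction):
    """Ask the LM to produce an (input, output) pair for a given instruction."""
    prompt = (
        f"Instruction: {instruction}\n"
        "Write one example input for this task, then the correct output.\n"
        "Input:"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return resp.choices[0].message.content.strip()

def is_novel(candidate, pool, max_overlap=0.7):
    """Crude word-overlap filter: drop near-duplicates of existing instructions."""
    cand_tokens = set(candidate.lower().split())
    for inst in pool:
        tokens = set(inst.lower().split())
        if tokens and len(cand_tokens & tokens) / len(tokens) > max_overlap:
            return False
    return True

pool = list(seed_instructions)
while len(pool) < 10:  # the lecture scales this loop to ~52K instructions
    new_inst = generate_instruction(pool)
    if is_novel(new_inst, pool):
        pool.append(new_inst)
        print(new_inst, "->", generate_instance(new_inst))
```

In the published pipeline the novelty check is a ROUGE-L overlap threshold against the existing instruction pool; the set-overlap check above stands in for that step.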
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
AI Foundations: Prompt Engineering with ChatGPT - Arizona State University via Coursera
AI para docentes: Transforma tu enseñanza con ChatGPT - Universidad Anáhuac via Coursera
Intro to AI for Digital Marketing - Davidson College via edX
AI Prompt Engineering for Beginners - Davidson College via edX
Herramientas de Inteligencia Artificial para la productividad. Más allá del ChatGPT - Universitat Politècnica de València via edX