Building Better Language Models - Paradigms and Techniques
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore cutting-edge developments in language model construction in this 56-minute lecture by Colin Raffel of UNC and Hugging Face. Delve into paradigms of language model development, including specifying desired behavior by writing it down, learning from examples, and alternative approaches. Examine the concept of zero-shot learning and its implications. Investigate the Multitask Prompted Training technique and its application to NLP datasets. Learn about writing effective prompts and leveraging pretrained models. Analyze experimental results, model architectures, and adaptation outcomes through real-world examples. Discover parameter-efficient fine-tuning techniques and methods for improving model accuracy. Gain insights into pipeline development, sanity checks, and the importance of relevant context in language modeling. This talk, part of the CSCI 601.771: Self-Supervised Learning course at Johns Hopkins University, offers valuable knowledge for researchers and practitioners in natural language processing and machine learning.
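The zero-shot setup the talk discusses can be tried directly: the Multitask Prompted Training work released T0, a model that answers natural-language prompts for tasks it was never explicitly fine-tuned on. Below is a minimal sketch using the Hugging Face transformers library; the checkpoint choice (the smaller T0_3B variant) and the example prompt are illustrative, not taken from the lecture.

```python
# Zero-shot prompting with T0, the model released with the
# Multitask Prompted Training paper. The "task" is expressed
# entirely as a natural-language prompt; no gradient updates occur.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "bigscience/T0_3B"  # illustrative choice; needs several GB of RAM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# A task the model was not explicitly trained on, phrased as a prompt.
prompt = "Is this review positive or negative? Review: the food was bland and overpriced."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because T0 is an encoder-decoder model trained on many prompted NLP datasets, swapping in a different instruction (summarization, question answering, inference) requires only changing the prompt string.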
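The overview also mentions parameter-efficient fine-tuning, in which only a small fraction of a model's weights are updated. The lecture's specific method is not assumed here; as one common instance, this sketch uses LoRA adapters from the Hugging Face peft library on a small T5 model, with illustrative hyperparameters.

```python
# Parameter-efficient fine-tuning: wrap a frozen pretrained model
# with low-rank (LoRA) adapters so only a small set of new weights trains.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                       # rank of the low-rank update matrices (illustrative)
    lora_alpha=16,             # scaling factor applied to the adapter output
    lora_dropout=0.1,
    target_modules=["q", "v"], # T5's attention projections are named q/k/v/o
)
model = get_peft_model(base, config)

# Reports how few parameters are trainable; the base model stays frozen.
model.print_trainable_parameters()
```

The wrapped model can then be trained with any standard loop or trainer; only the adapter weights receive gradients, which is what makes the approach cheap relative to full fine-tuning.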
Syllabus
Intro
Paradigm 1: Writing Down
Paradigm 2: Example
Paradigm 3: Example
How Are Most Language Models Built?
Zero-shot Learning
Alternative Paradigm
Multitask Prompted Training
NLP Datasets
Writing Prompts
Pretrained Models
Paper
Results
Model Architecture
Experimental Results
Adaptation Results
Example in Context
Parameter-Efficient Fine-tuning
Learning Facts
Pipeline
Sanity Check
Model Accuracy
Relevant Context
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Generative AI Engineering and Fine-Tuning Transformers (IBM via Coursera)
Lessons From Fine-Tuning Llama-2 (Anyscale via YouTube)
The Next Million AI Apps - Developing Custom Models for Specialized Tasks (MLOps.community via YouTube)
LLM Fine-Tuning - Explained (CodeEmporium via YouTube)
Fine-tuning Large Models on Local Hardware Using PEFT and Quantization (EuroPython Conference via YouTube)