YoVDO

Fast Language Generation by Finetuning Pretrained Transformers

Offered By: Toronto Machine Learning Series (TMLS) via YouTube

Tags

Transformers Courses, GPT-3 Courses, Computational Complexity Courses

Course Description

Overview

Explore a cutting-edge approach to improving language generation efficiency in this 31-minute talk from the Toronto Machine Learning Series. Dive into the research presented by Jungo Kasai, a Ph.D. student from the University of Washington, as he discusses a novel method to enhance the performance of large-scale transformer models. Learn about the swap-then-finetune procedure, which converts pretrained transformers into recurrent neural networks (RNNs) to reduce generation overhead while maintaining accuracy. Discover how this technique provides an improved trade-off between efficiency and accuracy compared to standard transformers and other recurrent variants. Gain insights into the lower training costs associated with this finetuning process and understand its potential impact on natural language processing tasks that rely on large-scale pretrained transformers.
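To make the efficiency argument concrete, the sketch below shows the general idea behind attention that runs as an RNN at generation time: instead of re-attending over all previous tokens at each step, the model carries a fixed-size recurrent state. This is a minimal illustration of recurrent (linear) attention, not the talk's exact method; the feature map `phi` here is a simple positive ReLU stand-in, whereas the actual procedure discussed in the talk learns its attention replacement during finetuning.

```python
import numpy as np

def phi(x):
    # Hypothetical feature map standing in for a learned attention feature map.
    # Kept strictly positive so the normalizer below is never zero.
    return np.maximum(x, 0.0) + 1e-6

def rnn_attention_step(state, norm, q, k, v):
    """One decoding step of recurrent (linear) attention.

    A standard transformer attends over all n previous tokens at every
    step (O(n) time and memory per step). Here we instead maintain a
    fixed-size state:
        state += outer(phi(k), v)    # running sum of phi(k) v^T
        norm  += phi(k)              # running normalizer
    and read out y = state^T phi(q) / (norm . phi(q)), so each step
    costs O(1) in the sequence length.
    """
    fk = phi(k)                       # featurized key, shape (d,)
    state = state + np.outer(fk, v)   # (d, d_v)
    norm = norm + fk                  # (d,)
    fq = phi(q)                       # featurized query, shape (d,)
    y = state.T @ fq / (norm @ fq)    # attention output, shape (d_v,)
    return state, norm, y

# Toy decoding loop: state size stays constant as the sequence grows.
d, d_v, n = 8, 8, 16
rng = np.random.default_rng(0)
state, norm = np.zeros((d, d_v)), np.zeros(d)
for t in range(n):
    q, k, v = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d_v)
    state, norm, y = rnn_attention_step(state, norm, q, k, v)
```

Because the per-step cost no longer depends on how much text has already been generated, decoding long sequences becomes substantially cheaper, which is the efficiency/accuracy trade-off the talk examines.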

Syllabus

Fast Language Generation by Finetuning Pretrained Transformers


Taught by

Toronto Machine Learning Series (TMLS)

Related Courses

How to Build Codex Solutions
Microsoft via YouTube
Unlocking the Power of OpenAI for Startups - Microsoft for Startups
Microsoft via YouTube
Building Intelligent Applications with World-Class AI
Microsoft via YouTube
Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3
Stanford University via YouTube
ChatGPT: GPT-3, GPT-4 Turbo: Unleash the Power of LLM's
Udemy