Fast Language Generation by Finetuning Pretrained Transformers
Offered By: Toronto Machine Learning Series (TMLS) via YouTube
Course Description
Overview
Explore a cutting-edge approach to improving language generation efficiency in this 31-minute talk from the Toronto Machine Learning Series. Dive into the research presented by Jungo Kasai, a Ph.D. student from the University of Washington, as he discusses a novel method to enhance the performance of large-scale transformer models. Learn about the swap-then-finetune procedure, which converts pretrained transformers into recurrent neural networks (RNNs) to reduce generation overhead while maintaining accuracy. Discover how this technique provides an improved trade-off between efficiency and accuracy compared to standard transformers and other recurrent variants. Gain insights into the lower training costs associated with this finetuning process and understand its potential impact on natural language processing tasks that rely on large-scale pretrained transformers.
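To give a feel for why converting a transformer into an RNN reduces generation overhead, the sketch below shows the general idea behind linear (RNN-style) attention: the softmax attention is replaced with a feature-map-based recurrence, so each generation step updates a fixed-size state instead of re-attending over the whole history. This is a minimal, generic illustration only; the feature map, function names, and shapes are assumptions for this example and not the exact formulation presented in the talk.

```python
import numpy as np

def elu_feature_map(x):
    # A simple positive feature map (an assumption for illustration;
    # the swap-then-finetune approach learns its replacement for softmax
    # attention during finetuning).
    return np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0.0)))

def rnn_style_attention(queries, keys, values):
    """Causal attention computed as a recurrence over time steps.

    queries, keys: arrays of shape (T, d); values: shape (T, d_v).
    Instead of softmax(Q K^T) V, attention weights come from a feature map,
    so a running state (S, z) summarizes all past tokens. Each step costs
    O(1) in the sequence length, which is what makes generation fast.
    """
    T, d = queries.shape
    d_v = values.shape[1]
    S = np.zeros((d, d_v))   # running sum of phi(k_t) v_t^T
    z = np.zeros(d)          # running sum of phi(k_t)
    outputs = np.zeros((T, d_v))
    for t in range(T):
        phi_k = elu_feature_map(keys[t])
        phi_q = elu_feature_map(queries[t])
        S += np.outer(phi_k, values[t])      # update fixed-size state
        z += phi_k
        outputs[t] = (phi_q @ S) / (phi_q @ z + 1e-6)  # normalized readout
    return outputs
```

In a standard transformer decoder, step t attends over all t previous tokens, so per-token cost grows with length; in the recurrence above, the state (S, z) has a fixed size, which is the efficiency gain the talk targets while finetuning recovers accuracy.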
        
Syllabus
Fast Language Generation by Finetuning Pretrained Transformers
Taught by
Toronto Machine Learning Series (TMLS)
Related Courses
Automata Theory - Stanford University via edX
Introduction to Computational Thinking and Data Science - Massachusetts Institute of Technology via edX
算法设计与分析 (Design and Analysis of Algorithms) - Peking University via Coursera
How to Win Coding Competitions: Secrets of Champions - ITMO University via edX
Introdução à Ciência da Computação com Python Parte 2 (Introduction to Computer Science with Python, Part 2) - Universidade de São Paulo via Coursera
