
Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Machine Learning Courses
Transformer Models Courses
Embeddings Courses

Course Description

Overview

Explore an innovative approach to efficiently extend pretrained Masked Language Models (MLMs) to new languages in this 11-minute conference talk from the Center for Language & Speech Processing (CLSP) at Johns Hopkins University. Dive into the concept of mini-model adaptation, a compute-efficient alternative to traditional methods that builds a shallow mini-model from a fraction of a large model's parameters. Learn about two approaches for creating mini-models: MiniJoint and MiniPost. Discover how these techniques allow for rapid cross-lingual transfer while significantly reducing computational costs. Examine experimental results from XNLI, MLQA, and PAWS-X datasets, which demonstrate that mini-model adaptation matches the performance of standard approaches while using up to 2.4x less compute. Gain insights into this cutting-edge research based on the paper "Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training" presented at ACL Findings.
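To make the idea more concrete, the sketch below is a minimal, hypothetical PyTorch toy rather than the authors' code or the paper's exact recipe: it copies the bottom layers of a stand-in pretrained masked language model into a shallow mini-model, trains fresh target-language embeddings against those frozen layers with a masked-LM-style loss, and then transplants the learned embeddings into the full model. All class names, layer counts, vocabulary sizes, and the weight-tying choice are illustrative assumptions.

# Hypothetical sketch of shallow mini-model adaptation (toy code, not the authors' implementation).
import copy
import torch
import torch.nn as nn

VOCAB_SRC, VOCAB_TGT, D_MODEL, N_LAYERS, K_SHALLOW = 30000, 25000, 768, 12, 4

class MLMEncoder(nn.Module):
    """Toy stand-in for a pretrained masked language model."""
    def __init__(self, vocab_size, n_layers):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(D_MODEL, vocab_size)

    def forward(self, ids):
        return self.lm_head(self.encoder(self.embed(ids)))

# 1) A (pretend) pretrained source-language model with N_LAYERS layers.
full_model = MLMEncoder(VOCAB_SRC, N_LAYERS)

# 2) Mini-model: reuse only the bottom K_SHALLOW layers, freeze them, and attach
#    new target-language embeddings plus an MLM head tied to those embeddings.
mini = MLMEncoder(VOCAB_TGT, K_SHALLOW)
mini.encoder.layers = copy.deepcopy(full_model.encoder.layers[:K_SHALLOW])
for p in mini.encoder.parameters():
    p.requires_grad = False
mini.lm_head.weight = mini.embed.weight  # weight tying is an assumption here

# 3) Train only the new embeddings (and tied head) on target-language MLM data.
#    A real run would mask tokens; this uses a random placeholder batch.
optimizer = torch.optim.AdamW([p for p in mini.parameters() if p.requires_grad], lr=1e-4)
ids = torch.randint(0, VOCAB_TGT, (8, 128))
labels = ids.clone()
loss = nn.functional.cross_entropy(mini(ids).view(-1, VOCAB_TGT), labels.view(-1))
loss.backward()
optimizer.step()

# 4) Transplant the learned target-language embeddings into the full, deep model,
#    which can then be used for cross-lingual transfer on downstream tasks.
full_model.embed = mini.embed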

Syllabus

Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training - ACL Findings


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Sequence Models
DeepLearning.AI via Coursera
Modern Natural Language Processing in Python
Udemy
Stanford Seminar - Transformers in Language: The Development of GPT Models Including GPT-3
Stanford University via YouTube
Long Form Question Answering in Haystack
James Briggs via YouTube
Spotify's Podcast Search Explained
James Briggs via YouTube