
Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Machine Learning Courses Transformer Models Courses Embeddings Courses

Course Description

Overview

Explore an innovative approach to efficiently extend pretrained Masked Language Models (MLMs) to new languages in this 11-minute conference talk from the Center for Language & Speech Processing (CLSP) at Johns Hopkins University. Dive into the concept of mini-model adaptation, a compute-efficient alternative to traditional methods that builds a shallow mini-model from a fraction of a large model's parameters. Learn about two approaches for creating mini-models: MiniJoint and MiniPost. Discover how these techniques allow for rapid cross-lingual transfer while significantly reducing computational costs. Examine experimental results on the XNLI, MLQA, and PAWS-X datasets, which demonstrate that mini-model adaptation matches the performance of standard approaches while using up to 2.4x less compute. Gain insights into this cutting-edge research based on the paper "Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training" presented at ACL Findings.
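The description above stays at a high level, so below is a rough, unofficial sketch of the mini-model idea in plain PyTorch. It is not the authors' code: the toy model class, the layer count `K`, the vocabulary sizes, and the freeze-and-swap steps are illustrative assumptions that loosely follow the MiniPost-style recipe described in the talk (reuse the bottom layers of the full model as a shallow mini-model, train new target-language embeddings on it cheaply, then plug them back into the full-depth model).

```python
# Illustrative sketch only; not the authors' implementation.
import copy
import torch
import torch.nn as nn

class ToyMLM(nn.Module):
    """Toy masked language model: embeddings -> N Transformer layers -> output head."""
    def __init__(self, vocab_size, d_model=768, n_heads=12, n_layers=12):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, d_model)
        base_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.layers = nn.ModuleList(copy.deepcopy(base_layer) for _ in range(n_layers))
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, depth=None):
        x = self.embeddings(token_ids)
        for layer in self.layers[:depth]:  # depth=None runs the full stack
            x = layer(x)
        return self.head(x)

# Source-language model (pretrained weights assumed to exist).
full_model = ToyMLM(vocab_size=50_000)

# Mini-model: a shallow stack that reuses only the bottom K layers of the full
# model, paired with fresh embeddings and head for the new language's vocabulary.
K = 4  # assumed depth for illustration
mini_model = ToyMLM(vocab_size=30_000, n_layers=K)
mini_model.layers.load_state_dict(full_model.layers[:K].state_dict())

# Adaptation: freeze the copied body and train only the new-language parameters,
# which is much cheaper than backpropagating through the full-depth model.
for p in mini_model.layers.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW(
    (p for p in mini_model.parameters() if p.requires_grad), lr=1e-4)
# ... masked-LM training loop over target-language text would go here ...

# Transfer: plug the embeddings and head learned on the mini-model back into the
# full-depth model, which is then used (or fine-tuned) for the new language.
adapted = copy.deepcopy(full_model)
adapted.embeddings = mini_model.embeddings
adapted.head = mini_model.head
```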

Syllabus

Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training - ACL Findings


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

TensorFlow on Google Cloud
Google Cloud via Coursera
Art and Science of Machine Learning 日本語版
Google Cloud via Coursera
Art and Science of Machine Learning auf Deutsch
Google Cloud via Coursera
Art and Science of Machine Learning em Português Brasileiro
Google Cloud via Coursera
Art and Science of Machine Learning en Español
Google Cloud via Coursera