
Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Machine Learning Courses, Transformer Models Courses, Embeddings Courses

Course Description

Overview

Explore an innovative approach to efficiently extend pretrained Masked Language Models (MLMs) to new languages in this 11-minute conference talk from the Center for Language & Speech Processing (CLSP) at Johns Hopkins University. Dive into the concept of mini-model adaptation, a compute-efficient alternative to traditional methods that builds a shallow mini-model from a fraction of a large model's parameters. Learn about two approaches for creating mini-models: MiniJoint and MiniPost. Discover how these techniques allow for rapid cross-lingual transfer while significantly reducing computational costs. Examine experimental results from XNLI, MLQA, and PAWS-X datasets, which demonstrate that mini-model adaptation matches the performance of standard approaches while using up to 2.4x less compute. Gain insights into this cutting-edge research based on the paper "Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training" presented at ACL Findings.
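To make the core idea concrete, here is a minimal, hypothetical PyTorch sketch of how a shallow mini-model might be assembled and used. It assumes the general recipe this line of work builds on: new-language token embeddings are trained with a masked-LM-style objective against a frozen transformer body, and the cheap shallow copy (here, the bottom k layers of the full model, closest in spirit to MiniPost) is what those embeddings are trained on before being plugged into the full-depth model. The names (TinyMLM, make_mini_model) are illustrative and not the authors' implementation.

```python
# Hypothetical sketch of the mini-model adaptation idea (not the authors' code):
# build a shallow "mini-model" that reuses the bottom layers of a pretrained MLM,
# train only new-language embeddings on it, then plug those embeddings into the
# full-depth model.
import torch
import torch.nn as nn

class TinyMLM(nn.Module):
    """Toy masked LM: embeddings -> stack of Transformer encoder layers -> LM head."""
    def __init__(self, vocab_size, d_model=128, n_layers=6, n_heads=4):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers))
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):
        h = self.embeddings(ids)
        for layer in self.layers:
            h = layer(h)
        return self.lm_head(h)

def make_mini_model(full_model, new_vocab_size, k=2):
    """Shallow mini-model: fresh new-language embeddings + the bottom k layers
    of the full model (kept frozen) + a fresh LM head over the new vocabulary."""
    d_model = full_model.embeddings.embedding_dim
    mini = TinyMLM(new_vocab_size, d_model=d_model, n_layers=k)
    mini.layers = full_model.layers[:k]          # reuse the bottom k layers
    for p in mini.layers.parameters():           # the transformer body stays frozen
        p.requires_grad = False
    return mini

# Usage sketch with toy sizes and random data standing in for a target-language corpus.
SRC_VOCAB, TGT_VOCAB = 30_000, 20_000
full = TinyMLM(SRC_VOCAB)                  # stands in for the pretrained MLM
mini = make_mini_model(full, TGT_VOCAB)    # much cheaper to run than the full model

# Train only the new embeddings (and the mini LM head) with a masked-LM-style loss.
# A real setup would mask tokens and score only the masked positions.
opt = torch.optim.AdamW((p for p in mini.parameters() if p.requires_grad), lr=1e-4)
ids = torch.randint(0, TGT_VOCAB, (8, 16))
loss = nn.functional.cross_entropy(mini(ids).view(-1, TGT_VOCAB), ids.view(-1))
loss.backward()
opt.step()

# Cross-lingual transfer: plug the learned embeddings into the full-depth model,
# whose frozen body was only ever trained on the source language.
full.embeddings = mini.embeddings
```

Because gradients in the adaptation phase only flow through the embeddings and a handful of bottom layers, each training step costs a fraction of a full-depth forward/backward pass, which is where the compute savings reported in the talk come from.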

Syllabus

Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training - ACL Findings


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent