
Multilingual Representation Distillation with Contrastive Learning

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Contrastive Learning Courses
Low-Resource Languages Courses

Course Description

Overview

Explore a 12-minute conference talk from the European Chapter of the Association for Computational Linguistics (EACL) 2023, presented by Steven Weiting Tan from the Center for Language & Speech Processing (CLSP) at Johns Hopkins University. Dive into the innovative approach of integrating contrastive learning into multilingual representation distillation for quality estimation of parallel sentences. Discover how this method enhances the ability to find semantically similar sentences that can be used as translations across different languages. Learn about the experimental results that demonstrate significant improvements over previous sentence encoders like LASER, LASER3, and LaBSE, particularly in low-resource language scenarios. Gain insights into the applications of this technique in multilingual similarity search and corpus filtering tasks, and understand its potential impact on cross-lingual information retrieval and matching.
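
The method described in the talk combines knowledge distillation from an existing multilingual sentence encoder with a contrastive objective over translation pairs. The sketch below is a minimal, illustrative version of such a contrastive distillation loss in PyTorch, not the authors' exact formulation: the frozen-teacher assumption, embedding dimension, and temperature value are placeholders chosen for illustration.

# Minimal sketch (illustrative only): contrastive distillation with
# in-batch negatives. A frozen teacher encoder (e.g., a LASER-style model)
# is assumed to embed source sentences; a trainable student embeds the
# target-language translations. The InfoNCE-style loss pulls true
# translation pairs together and pushes apart other sentences in the batch.

import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_emb, teacher_emb, temperature=0.05):
    """Each student embedding should match the teacher embedding of its
    translation against all in-batch negatives."""
    student = F.normalize(student_emb, dim=-1)    # (batch, dim)
    teacher = F.normalize(teacher_emb, dim=-1)    # (batch, dim)
    logits = student @ teacher.t() / temperature  # cosine-similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Usage with random stand-in embeddings (real encoders would produce these):
batch, dim = 8, 1024
teacher_emb = torch.randn(batch, dim)             # frozen teacher outputs
student_emb = torch.randn(batch, dim, requires_grad=True)
loss = contrastive_distillation_loss(student_emb, teacher_emb)
loss.backward()
print(f"contrastive distillation loss: {loss.item():.4f}")

At inference time, the resulting sentence embeddings can be compared with cosine similarity to score candidate translation pairs, which is how such encoders are applied to the multilingual similarity search and corpus filtering tasks mentioned in the description.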

Syllabus

Multilingual Representation Distillation with Contrastive Learning - EACL 2023


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music
Stanford University via YouTube
How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained
Yannic Kilcher via YouTube
OpenAI CLIP - Connecting Text and Images - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Learning Compact Representation with Less Labeled Data from Sensors
tinyML via YouTube
Human Activity Recognition - Learning with Less Labels and Privacy Preservation
University of Central Florida via YouTube