Multilingual Representation Distillation with Contrastive Learning
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore multilingual sentence representations and their application in cross-lingual information retrieval through this 23-minute conference talk by Steven Tan from Johns Hopkins University's Center for Language & Speech Processing. Delve into the integration of contrastive learning with multilingual representation distillation for quality estimation of parallel sentences. Discover how this approach enhances multilingual similarity search and corpus filtering tasks, particularly in low-resource languages. Learn about the significant performance improvements achieved over previous sentence encoders like LASER, LASER3, and LaBSE, as demonstrated through extensive experiments.
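To make the core idea concrete, here is a minimal sketch of a contrastive (InfoNCE-style) distillation objective of the kind the talk describes: a student encoder's embedding of a sentence is pulled toward the teacher's embedding of its parallel translation, while other sentences in the batch serve as negatives. This is an illustrative toy in numpy, not the speaker's implementation; the function and variable names are hypothetical.

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def contrastive_distillation_loss(student_emb, teacher_emb, temperature=0.05):
    """InfoNCE-style loss: each student embedding (row i) should be most
    similar to the teacher embedding of its translation (row i), with the
    other rows in the batch acting as in-batch negatives."""
    s = normalize(student_emb)
    t = normalize(teacher_emb)
    logits = s @ t.T / temperature                    # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives sit on the diagonal

# Toy batch: 4 sentence pairs with 8-dimensional embeddings.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))
aligned_student = teacher + 0.01 * rng.normal(size=(4, 8))  # student close to teacher
random_student = rng.normal(size=(4, 8))                    # untrained student
loss_aligned = contrastive_distillation_loss(aligned_student, teacher)
loss_random = contrastive_distillation_loss(random_student, teacher)
```

An aligned student yields a much lower loss than a random one, which is the signal that drives distillation; the same cosine-similarity scores can then rank parallel sentence pairs for corpus filtering.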
Syllabus
Multilingual Representation Distillation with Contrastive Learning - Steven Tan (JHU)
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music (Stanford University via YouTube)
How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained (Yannic Kilcher via YouTube)
OpenAI CLIP - Connecting Text and Images - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Learning Compact Representation with Less Labeled Data from Sensors (tinyML via YouTube)
Human Activity Recognition - Learning with Less Labels and Privacy Preservation (University of Central Florida via YouTube)