Multimodal Representation Learning for Vision and Language
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore multimodal representation learning for vision and language in this 56-minute lecture by Kai-Wei Chang from UCLA. Delve into the challenges of cross-modality decision-making in artificial intelligence tasks, such as answering complex questions about images. Learn about recent advances in representation learning, which map data from different modalities into shared embedding spaces and enable cross-domain knowledge transfer through vector transformations. Discover the speaker's recent efforts in building multimodal representations for vision-language understanding, including training on weakly supervised image captioning data and on unsupervised image and text corpora. Understand how these models can ground language elements to image regions without explicit supervision. Examine a wide range of vision and language applications and discuss remaining challenges in the field. Gain insights from Dr. Chang, an associate professor at UCLA, whose research focuses on robust machine learning methods and on fair, reliable language processing technologies for social good applications.
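To make the idea of a shared embedding space concrete, here is a minimal, hypothetical sketch (not the architecture presented in the lecture): two small projection layers stand in for image and text encoders, map both modalities into one space, and a symmetric contrastive loss pulls matched image-caption pairs together, loosely mirroring weakly supervised training on captioning data. All dimensions, names, and the loss choice are illustrative assumptions.

```python
# Illustrative sketch of a shared vision-language embedding space.
# The encoders, dimensions, and loss are placeholders, not the lecture's model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEmbeddingModel(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, embed_dim=256):
        super().__init__()
        # Linear projections standing in for full image and text encoders.
        self.image_proj = nn.Linear(image_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)

    def forward(self, image_feats, text_feats):
        # Map both modalities into the same space and L2-normalize,
        # so cosine similarity measures cross-modal alignment.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img, txt


def contrastive_loss(img, txt, temperature=0.07):
    # Symmetric InfoNCE-style loss: matched image-caption pairs (the diagonal)
    # should score higher than mismatched pairs in the batch.
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2


# Toy usage with random features standing in for encoder outputs.
model = SharedEmbeddingModel()
image_feats = torch.randn(8, 2048)  # e.g., region features from a vision backbone
text_feats = torch.randn(8, 768)    # e.g., pooled caption features from a language model
img, txt = model(image_feats, text_feats)
print(contrastive_loss(img, txt).item())
```

Once trained, nearest-neighbor search in this shared space can, in principle, match captions to images or image regions, which is the kind of grounding the lecture discusses.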
Syllabus
Multimodal Representation Learning for Vision and Language - Kai-Wei Chang (UCLA)
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Facebook: Product Optimization with Adaptive Experimentation - F8 2019 - Meta via YouTube
Improving Conversational AI - Advancements in NLP Research at Facebook - Meta via YouTube
SILCO: Show a Few Images, Localize the Common Object - University of Central Florida via YouTube
Open AI's Whisper Is Amazing - sentdex via YouTube
Annotation-Efficient Object Detection: Unsupervised Discovery to Active Learning - VinAI via YouTube