Multimodal Representation Learning for Vision and Language

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Artificial Intelligence, Machine Learning, Computer Vision, Image Captioning, Weakly Supervised Learning

Course Description

Overview

Explore multimodal representation learning for vision and language in this 56-minute lecture by Kai-Wei Chang of UCLA. Delve into the challenges of cross-modality decision-making in artificial intelligence tasks, such as answering complex questions about images. Learn about recent advances in representation learning that map data from different modalities into a shared embedding space, enabling knowledge to transfer across domains through vector transformations. Discover the speaker's recent efforts in building multimodal representations for vision-language understanding, including models trained on weakly supervised image-captioning data and on unsupervised image and text corpora. Understand how these models can ground language elements to image regions without explicit supervision. Examine a wide range of vision and language applications and the challenges that remain in the field. Gain insights from Dr. Chang, an associate professor at UCLA whose research focuses on robust machine learning methods and on fair, reliable language processing technologies for social-good applications.
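The shared embedding space mentioned above can be sketched in a few lines. The following is a minimal illustration only, assuming precomputed image and text features and a CLIP-style contrastive objective; the SharedEmbedder class, the feature dimensions, and the loss function are hypothetical and do not reproduce the specific models discussed in the lecture.

# Minimal sketch of a shared vision-language embedding space.
# Encoders, dimensions, and the CLIP-style contrastive loss are
# illustrative assumptions, not the models from the talk.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbedder(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=256):
        super().__init__()
        # Linear projections map each modality into the same space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)

    def forward(self, img_feats, txt_feats):
        # L2-normalize so cosine similarity is a plain dot product.
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return img, txt

def contrastive_loss(img, txt, temperature=0.07):
    # Paired image/caption rows are positives; all others are negatives.
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random stand-in features for a batch of 8 pairs.
model = SharedEmbedder()
img_feats, txt_feats = torch.randn(8, 2048), torch.randn(8, 768)
img, txt = model(img_feats, txt_feats)
print(contrastive_loss(img, txt).item())

Once images and captions share a space like this, cross-modal retrieval or grounding reduces to nearest-neighbor search over the embedded vectors.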

Syllabus

Multimodal Representation Learning for Vision and Language - Kai-Wei Chang (UCLA)


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

2D image processing (Higher School of Economics via Coursera)
3D Reconstruction - Multiple Viewpoints (Columbia University via Coursera)
3D Reconstruction - Single Viewpoint (Columbia University via Coursera)
AI-900: Microsoft Certified Azure AI Fundamentals (A Cloud Guru)
TensorFlow Developer Certificate Exam Prep (A Cloud Guru)