Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore a comparative analysis of visual representations in vision-and-language models versus vision-only models in this 10-minute conference talk from EACL 2024. Delve into the research conducted by Zhuowan Li from the Center for Language & Speech Processing at JHU, which probes a wide range of tasks to assess the quality of learned representations. Discover findings suggesting that vision-and-language models excel at label-prediction tasks such as object and attribute prediction, while vision-only models perform better on dense prediction tasks that require more localized information. Gain insights into the role of language in visual learning, along with an empirical guide to various pretrained models, contributing to the ongoing discussion about how effectively joint learning paradigms capture individual modalities.
Syllabus
Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models - EACL 2024
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
中级汉语语法 | Intermediate Chinese Grammar
Peking University via edX
Miracles of Human Language: An Introduction to Linguistics
Leiden University via Coursera
Introduction to Natural Language Processing
University of Michigan via Coursera
Linguaggio, identità di genere e lingua italiana (Language, Gender Identity, and the Italian Language)
Ca' Foscari University of Venice via EduOpen
Natural Language Processing
Indian Institute of Technology, Kharagpur via Swayam