Computer Vision and Language Models: Bridging the Modality Gap

Offered By: Data Science Dojo via YouTube

Tags

Computer Vision Courses Prompt Engineering Courses AI Agents Courses

Course Description

Overview

Explore the transformative impact of large language models (LLMs) on computer vision in this 40-minute talk by Jacob Marks. Gain insights into key LLM-centered projects like VisProg, ViperGPT, VoxelGPT, and HuggingGPT that are revolutionizing the field. Learn about the challenges and lessons from building VoxelGPT, and discover practical tips for domain-specific prompt engineering. Understand how text-only LLMs can achieve remarkable success in visual tasks through prompting and tool use. Delve into topics such as unimodal and multimodal tasks, bridging the modality gap, and the role of FiftyOne in building bridges between modalities. Explore the potential of agents to acquire skills and the ongoing role of humans in this evolving landscape. Conclude with a forward-looking perspective on the future of LLMs in computer vision.

Syllabus

– Introduction
– GPT-4
– What is computer vision?
– Unimodal tasks
– Multimodal tasks
– Bridging the modality gap
– Building bridges with FiftyOne
– Agents can acquire skills
– Role of humans


Taught by

Data Science Dojo

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Computational Photography
Georgia Institute of Technology via Coursera
Einführung in Computer Vision
Technische Universität München (Technical University of Munich) via Coursera
Introduction to Computer Vision
Georgia Institute of Technology via Udacity