Computer Vision and Language Models: Bridging the Modality Gap

Offered By: Data Science Dojo via YouTube

Tags

Computer Vision Courses Prompt Engineering Courses AI Agents Courses

Course Description

Overview

Explore the transformative impact of large language models (LLMs) on computer vision in this 40-minute talk by Jacob Marks. Gain insights into key LLM-centered projects like VisProg, ViperGPT, VoxelGPT, and HuggingGPT that are revolutionizing the field. Learn about the challenges and lessons from building VoxelGPT, and discover practical tips for domain-specific prompt engineering. Understand how text-only LLMs can achieve remarkable success in visual tasks through prompting and tool use. Delve into topics such as unimodal and multimodal tasks, bridging the modality gap, and the role of FiftyOne in building bridges between modalities. Explore the potential of agents to acquire skills and the ongoing role of humans in this evolving landscape. Conclude with a forward-looking perspective on the future of LLMs in computer vision.
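The tool-use pattern described above, in which a text-only LLM writes queries that are then executed against a visual dataset, is the core idea behind projects like VoxelGPT and FiftyOne's role as a bridge between modalities. The sketch below is a minimal illustration of that pattern, assuming the FiftyOne quickstart dataset; the natural-language request and the generated query are illustrative stand-ins, not VoxelGPT's actual prompt or output.

    import fiftyone as fo
    import fiftyone.zoo as foz
    from fiftyone import ViewField as F

    # Load a small sample dataset with images and model predictions.
    dataset = foz.load_zoo_dataset("quickstart")

    # Imagine the user asks, in plain text:
    #   "Show me images with more than five high-confidence detections."
    # A text-only LLM never sees the pixels; it only emits a dataset
    # query like the one below, which is then run against the images.
    generated_view = (
        dataset
        .filter_labels("predictions", F("confidence") > 0.9)
        .match(F("predictions.detections").length() > 5)
    )

    print(f"{len(generated_view)} of {len(dataset)} samples match the request")

    # Optionally inspect the matching samples in the FiftyOne App:
    # session = fo.launch_app(generated_view)

The language model only ever handles text, yet the executed query operates on visual data, which is what the talk means by bridging the modality gap through prompting and tool use.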

Syllabus

– Introduction
– GPT-4
– What is computer vision?
– Unimodal tasks
– Multimodal tasks
– Bridging the modality gap
– Building bridges with FiftyOne
– Agents can acquire skills
– The role of humans


Taught by

Data Science Dojo

Related Courses

AI Content Creation with DALL-E: Visual SEO Strategy
Coursera Project Network via Coursera
Become an AI-Powered Marketer
Semrush Academy
AI Foundations: Prompt Engineering with ChatGPT
Arizona State University via Coursera
Learn AI Agents
Scrimba
IBM Applied AI
IBM via Coursera