YoVDO

V-JEPA: Revisiting Feature Prediction for Learning Visual Representations from Video

Offered By: Yannic Kilcher via YouTube

Tags

- Computer Vision Courses
- Artificial Intelligence Courses
- Machine Learning Courses
- Deep Learning Courses
- Unsupervised Learning Courses
- Neural Networks Courses
- Representation Learning Courses

Course Description

Overview

Explore an in-depth explanation of V-JEPA (Video Joint Embedding Predictive Architecture), a method for unsupervised representation learning from video data. Delve into the predictive feature principle, the original JEPA architecture, and the V-JEPA concept and architecture. Examine experimental results and a qualitative evaluation via decoding. Learn how this approach, developed by Meta AI researchers, achieves strong performance on both motion- and appearance-based tasks using only latent representation prediction as its objective. Gain insight into the potential of this technique for advancing unsupervised learning in computer vision and its implications for future AI developments.
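The core idea described above — predict the *latent representations* of masked video regions from the visible context, rather than reconstructing pixels — can be sketched in a few lines. This is a heavily simplified toy, not the paper's implementation: the real V-JEPA uses Vision Transformers over spatio-temporal patch "tubelets", an EMA-updated target encoder with stopped gradients, and a transformer predictor; the linear/tanh maps and mean-pooled predictor below are hypothetical stand-ins to make the objective concrete.

```python
# Toy sketch of a V-JEPA-style latent prediction loss (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Stand-in encoder: a linear map plus tanh (hypothetical, not a ViT)."""
    return np.tanh(x @ W)

def vjepa_loss(patches, mask, W_ctx, W_tgt, W_pred):
    """L1 distance between predicted and target latents for masked patches.

    patches: (num_patches, dim) flattened patch features
    mask:    boolean array, True where a patch is masked (to be predicted)
    """
    # Context encoder sees only the unmasked patches.
    ctx = encode(patches[~mask], W_ctx)
    # Toy predictor: pool the context and map it to one latent per masked patch.
    pred = np.tanh(ctx.mean(axis=0, keepdims=True) @ W_pred)
    pred = np.repeat(pred, mask.sum(), axis=0)
    # Target encoder (an EMA copy in the paper, with gradients stopped)
    # produces the latents the predictor must match -- no pixel reconstruction.
    tgt = encode(patches[mask], W_tgt)
    return float(np.abs(pred - tgt).mean())

dim, latent = 8, 4
patches = rng.normal(size=(16, dim))
mask = np.zeros(16, dtype=bool)
mask[4:8] = True                       # mask out a contiguous block of patches
W_ctx = rng.normal(size=(dim, latent))
W_tgt = W_ctx.copy()                   # EMA target initialized as a copy
W_pred = rng.normal(size=(latent, latent))

loss = vjepa_loss(patches, mask, W_ctx, W_tgt, W_pred)
print(loss)
```

Because the loss is computed entirely in latent space, the encoder is free to discard unpredictable pixel-level detail — the property the video credits for V-JEPA's performance on both motion and appearance tasks.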

Syllabus

- Intro
- Predictive Feature Principle
- Weights & Biases course on Structured LLM Outputs
- The original JEPA architecture
- V-JEPA Concept
- V-JEPA Architecture
- Experimental Results
- Qualitative Evaluation via Decoding


Taught by

Yannic Kilcher

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Computational Photography
Georgia Institute of Technology via Coursera
Einführung in Computer Vision (Introduction to Computer Vision)
Technische Universität München (Technical University of Munich) via Coursera
Introduction to Computer Vision
Georgia Institute of Technology via Udacity