This and That: Language-Gesture Conditioning of Visual Generative Models for Robotics - Tech Talk #7

Offered By: HuggingFace via YouTube

Tags

Robotics Courses, Machine Learning Courses, Computer Vision Courses, Neural Networks Courses, Human-Robot Interaction Courses, Gesture Recognition Courses, Generative Models Courses

Course Description

Overview

Explore a comprehensive tech talk on "This&That", presented by Jeong Joon Park as part of the LeRobot Tech Talk series organized by Hugging Face. Delve into the intersection of machine learning and robotics, covering visual generative models, language-gesture conditioning, and the translation of generated video plans into robot actions. Begin with an introduction and a recap of machine learning in robotics to date, then progress through detailed discussions of visual generative models and how they are conditioned on language and gestures. Learn how the generated video plans carry over to practical robot actions, and conclude with insights into future work in the field. Engage with the Q&A session to deepen your understanding of this cutting-edge research. Access additional resources, including the research paper and project page, to further expand your knowledge of this approach to robotics and AI.

Syllabus

- Introduction
- Recap on ML for robotics to date
- Visual generative models
- Language-gesture conditioning of visual generative models
- Generated video plans to robot action
- Conclusion and future work
- Q & A


Taught by

Hugging Face

Related Courses

- Introduction to Artificial Intelligence (Stanford University via Udacity)
- Natural Language Processing (Columbia University via Coursera)
- Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
- Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
- Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)