Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a comprehensive video lecture and interview on using large language models as zero-shot planners for embodied agents. Delve into the VirtualHome environment and learn how to translate unstructured language model outputs into structured grammar for interactive environments. Discover techniques for decomposing high-level tasks into actionable steps without additional training. Examine the challenges of plan evaluation and execution, and understand the contributions of this research. Gain insights from the interview with first author Wenlong Huang, covering topics such as model size impact, output refinement, and the effectiveness of Codex. Analyze experimental results and consider future implications for extracting actionable knowledge from language models in embodied AI applications.
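To make the translation idea concrete, here is a minimal sketch (not code from the video or paper): it assumes a small hypothetical list of admissible environment actions and uses a sentence-embedding model to map a freeform, LM-generated step to the closest admissible one by cosine similarity. The action list, model name, and example phrasing are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical admissible steps; a real interactive environment such as
# VirtualHome defines its own action grammar.
ADMISSIBLE_STEPS = [
    "walk to kitchen",
    "open fridge",
    "grab milk",
    "close fridge",
    "walk to living room",
    "switch on television",
]

# Any sentence encoder works; this model name is an assumption.
model = SentenceTransformer("all-MiniLM-L6-v2")
admissible_embeddings = model.encode(ADMISSIBLE_STEPS, convert_to_tensor=True)

def translate_step(freeform_step: str) -> str:
    """Map a freeform step (e.g. produced by a zero-shot-prompted LM)
    to the most similar admissible action."""
    query = model.encode(freeform_step, convert_to_tensor=True)
    scores = util.cos_sim(query, admissible_embeddings)[0]
    return ADMISSIBLE_STEPS[int(scores.argmax())]

# A language model might phrase a plan step loosely:
print(translate_step("Go over to the fridge and take out some milk"))
# -> likely "grab milk", i.e. the nearest admissible step
```

In this sketch the language model proposes plan steps in free text, and the embedding lookup constrains each step to the environment's structured grammar without any additional training.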
Syllabus
- Intro & Overview
- The VirtualHome environment
- The problem of plan evaluation
- Contributions of this paper
- Start of interview
- How to use language models with environments?
- How much does model size matter?
- How to refine the large models' outputs?
- Possible improvements to the translation procedure
- Why does Codex perform so well?
- Diving into experimental results
- Future outlook
Taught by
Yannic Kilcher
Related Courses
Business Considerations for 5G with Edge, IoT, and AI (Linux Foundation via edX)
FinTech for Finance and Business Leaders (ACCA via edX)
AI-900: Microsoft Certified Azure AI Fundamentals (A Cloud Guru)
AWS Certified Machine Learning - Specialty (LA) (A Cloud Guru)
Azure AI Components and Services (A Cloud Guru)