Unlocking Reasoning in Large Language Models - Conf42 ML 2023
Offered By: Conf42 via YouTube
Course Description
Overview
Explore the intricacies of reasoning in large language models through this comprehensive conference talk. Delve into various techniques for eliciting and measuring reasoning abilities, including chain of thought prompting, program-aided language models, and plan-and-solve prompting. Discover innovative approaches like self-taught reasoners, specializing smaller models for multi-step reasoning, and iterative prompting methods. Learn about advanced concepts such as tool usage, the REACT framework, and the Chameleon model. Gain valuable insights into the current state and future potential of reasoning capabilities in AI language models, with practical examples and further reading recommendations provided.
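As a taste of the first technique the talk covers, chain of thought prompting simply shows the model worked examples with intermediate reasoning steps so it imitates that format on a new question. A minimal sketch follows; the example problems and the `build_cot_prompt` helper are illustrative assumptions, not material from the talk itself:

```python
# Minimal sketch of chain-of-thought prompting: build a few-shot prompt
# whose examples spell out intermediate reasoning, so the model continues
# in the same step-by-step style for the final question.

FEW_SHOT_EXAMPLES = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
                    "How many balls does he have now?",
        "reasoning": "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
                     "5 + 6 = 11.",
        "answer": "11",
    },
]

def build_cot_prompt(question: str) -> str:
    """Assemble a few-shot chain-of-thought prompt for a new question."""
    parts = []
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Q: {ex['question']}\n"
                     f"A: {ex['reasoning']} The answer is {ex['answer']}.")
    # The trailing "A:" invites the model to produce its own reasoning chain.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_cot_prompt("A baker makes 4 trays of 12 cookies and sells 30. "
                          "How many cookies are left?")
print(prompt)
```

The resulting string would be sent to any text-completion model; self-consistency (also covered in the talk) extends this by sampling several reasoning chains and taking a majority vote over the final answers.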
Syllabus
Intro
Preface
About Logesh
Agenda
What Is Reasoning?
How Is Reasoning Measured in the Literature?
Eliciting Reasoning
Chain of Thought Prompting and Self-Consistency
Program-Aided Language Models
Plan-and-Solve Prompting
STaR: Self-Taught Reasoner - Bootstrapping Reasoning with Reasoning
Specializing Smaller Language Models Towards Multi-Step Reasoning
Distilling Step-by-Step
Recursive and Iterative Prompting
Least-to-Most Prompting
Plan, Eliminate, and Track
Describe, Explain, Plan and Select
Tool Usage
ReAct: Reason and Act
Chameleon
Acknowledgement & Further Reading
Taught by
Conf42
Related Courses
Introduction to Logic - Stanford University via Coursera
Think Again: How to Reason and Argue - Duke University via Coursera
Public Speaking - University of Washington via edX
Artificial Intelligence: Knowledge Representation And Reasoning - Indian Institute of Technology Madras via Swayam
AP® Psychology - Course 3: How the Mind Works - The University of British Columbia via edX