Unlocking Reasoning in Large Language Models - Conf42 ML 2023
Offered By: Conf42 via YouTube
Course Description
Overview
Explore the intricacies of reasoning in large language models in this conference talk. Delve into techniques for eliciting and measuring reasoning abilities, including chain-of-thought prompting, program-aided language models, and plan-and-solve prompting. Discover approaches such as self-taught reasoners, specializing smaller models for multi-step reasoning, and recursive and iterative prompting methods. Learn about tool usage, the ReAct framework, and the Chameleon model. Gain insight into the current state and future potential of reasoning capabilities in AI language models, with practical examples and further reading recommendations provided.
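As a taste of the first technique the talk covers, chain-of-thought prompting simply prepends worked examples that spell out intermediate reasoning steps, so the model imitates step-by-step reasoning before answering. A minimal sketch (the exemplar question and the `build_cot_prompt` helper are illustrative, not taken from the talk):

```python
# Chain-of-thought (CoT) prompting: the few-shot exemplar shows the
# intermediate reasoning, not just the final answer, nudging the model
# to reason step by step on the new question.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model continues in the same style."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt("A baker makes 4 trays of 6 muffins. How many muffins in total?")
print(prompt)
```

The resulting string would be sent to any completion-style language model; self-consistency, also covered in the talk, extends this by sampling several such reasoning chains and taking a majority vote over the final answers.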
Syllabus
Intro
Preface
About Logesh
Agenda
What Is Reasoning?
How Is Reasoning Measured in the Literature?
Eliciting Reasoning
Chain-of-Thought Prompting and Self-Consistency
Program-Aided Language Models
Plan-and-Solve Prompting
STaR: Self-Taught Reasoner - Bootstrapping Reasoning with Reasoning
Specializing Smaller Language Models Towards Multi-Step Reasoning
Distilling Step-by-Step
Recursive and Iterative Prompting
Least-to-Most Prompting
Plan, Eliminate, and Track
Describe, Explain, Plan and Select
Tool Usage
ReAct: Reason and Act
Chameleon
Acknowledgement & Further Reading
Taught by
Conf42
Related Courses
Trying Out Flan 20B with UL2 - Working in Colab with 8-Bit Inference (Sam Witteveen via YouTube)
Master ChatGPT Prompting - Advance Prompt Engineering Techniques for Optimal Results (Data Science Dojo via YouTube)
Prompt Engineering - Crash Course (Data Science Dojo via YouTube)
AI Mastery: Ultimate Crash Course in Prompt Engineering for Large Language Models (Data Science Dojo via YouTube)
Prompt Engineering with Llama 2&3 (DeepLearning.AI via Coursera)