Scaling Laws of Formal Reasoning in Large Language Models - Lecture 7
Offered By: MICDE, University of Michigan via YouTube
Course Description
Overview
Explore critical advances in improving the formal reasoning abilities of Large Language Models (LLMs) for scientific applications in this 28-minute conference talk. Delve into two key research directions: Llemma, a foundation model designed specifically for mathematics, and "easy-to-hard" generalization. Learn how Llemma is trained on the extensive Proof-Pile-2 corpus to improve the scaling relationship between training compute and reasoning ability, yielding significant accuracy gains. Discover the potential of training strong evaluator models on easier problems to facilitate generalization to more complex ones. Gain insight into the importance of scaling high-quality data collection, alongside further algorithmic development, for enhancing formal reasoning capabilities in LLMs.
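To make the "scaling relationship between training compute and reasoning ability" concrete, here is a minimal sketch, not taken from the talk, of how such scaling-law curves are typically fit: a power law error(C) = a * C^(-b) relating training compute C to downstream error, which becomes a straight line in log-log space. The compute and error values below are hypothetical placeholders for illustration only.

import numpy as np

# Hypothetical (training compute in FLOPs, held-out error rate) measurements.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
error = np.array([0.62, 0.48, 0.37, 0.29, 0.22])

# Fit log(error) = log(a) - b * log(C); the slope of the line gives -b.
slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
a, b = np.exp(intercept), -slope
print(f"fitted scaling exponent b = {b:.3f}")

# Extrapolate the fitted curve to 10x more compute than the largest run.
c_next = 1e23
print(f"predicted error at {c_next:.0e} FLOPs = {a * c_next ** (-b):.3f}")

A larger fitted exponent b means accuracy improves faster as compute grows; the talk's claim is that a corpus like Proof-Pile-2 can shift this curve favorably for mathematical reasoning.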
Syllabus
07. SciFM24 Sean Welleck: Scaling Laws of Formal Reasoning
Taught by
MICDE, University of Michigan
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent