Test Time Compute: Verifiers and Parallel Sampling - Part 2
Offered By: Trelis Research via YouTube
Course Description
Syllabus
Sampling and Verification
Training Compute vs Test Time Compute
Part 1 Recap: Sampling and Chain of Thought
Video Overview: Parallel Sampling and Filtering with Verifiers
How to sample multiple answers in parallel
Verifier Methods
Improving verifiers with fine-tuning or prompt optimisation
Output verifiers versus process verifiers
Majority Voting and Monte Carlo Tree Search (MCTS)
Notebook Setup - Trelis.com/advanced-inference
Installation of vLLM with guided decoding
Loading Llama 3.2 1B (as opposed to 3B in Part 1)
Baseline single-shot approach
Parallel sampling approach: Pass@n with a perfect verifier (see the first sketch after this syllabus)
Parallel sampling with a voting verifier using vLLM guided decoding (see the voting sketch after this syllabus)
Prompt optimisation for verifiers
Parallel sampling with a scoring verifier (scores 1-10)
Parallel sampling with a binary true/false scoring verifier (see the judge sketch after this syllabus)
Llama 3.2 1B Results
Literature Review: Let’s Verify Step by Step; Large Language Monkeys; Are More LLM Calls All You Need?; Tree of Thoughts
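The parallel-sampling items above reduce to a simple loop: draw n candidate answers for the same prompt, then check whether any of them matches a reference answer (Pass@n with a perfect verifier). The sketch below illustrates that idea with vLLM's offline API; the model id, the prompt, and the answer-extraction regex are illustrative assumptions rather than the notebook's exact code.

```python
import re
from vllm import LLM, SamplingParams

# Illustrative checkpoint name; the video loads Llama 3.2 1B, but this exact id is an assumption.
llm = LLM(model="meta-llama/Llama-3.2-1B-Instruct")

question = "What is 17 * 24? End your reply with 'Answer: <number>'."
reference_answer = "408"  # ground truth, i.e. a "perfect" verifier

# n parallel samples for the same prompt; temperature > 0 so the samples differ.
params = SamplingParams(n=8, temperature=0.8, top_p=0.95, max_tokens=256)
request_output = llm.generate([question], params)[0]

def extract_answer(text: str):
    """Pull the final 'Answer: ...' span out of a completion, if present."""
    match = re.search(r"Answer:\s*([^\n]+)", text)
    return match.group(1).strip() if match else None

candidates = [extract_answer(c.text) for c in request_output.outputs]
n_correct = sum(c == reference_answer for c in candidates)
print(f"{n_correct}/{len(candidates)} samples correct; Pass@n solved: {n_correct > 0}")
```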
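The voting-verifier item replaces the reference answer with self-consistency: whichever extracted answer the samples agree on most often is returned. In the video this is paired with vLLM guided decoding so the model can only emit answers in a fixed format; the sketch below skips that step and simply takes a majority vote over already-extracted strings, showing the general technique rather than the notebook's implementation.

```python
from collections import Counter
from typing import Optional

def majority_vote(candidates: list[Optional[str]]) -> Optional[str]:
    """Pick the most frequent non-empty answer among the parallel samples."""
    votes = Counter(c for c in candidates if c)
    return votes.most_common(1)[0][0] if votes else None

# Hypothetical extracted answers from eight parallel samples.
samples = ["408", "408", "398", "408", None, "408", "417", "408"]
print(majority_vote(samples))  # -> "408"
```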
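The binary true/false verifier item uses a model as a judge: each candidate answer is scored independently and only candidates judged correct survive the filter (a 1-10 scoring verifier works the same way, with a threshold instead of a boolean). A minimal sketch, assuming a vLLM LLM object like the one above and a hand-written judge prompt; the video constrains the verdict with guided decoding, whereas this sketch just parses free text.

```python
from vllm import LLM, SamplingParams

# Hypothetical judge prompt; the exact wording is an assumption.
JUDGE_TEMPLATE = (
    "Question: {question}\n"
    "Proposed answer: {answer}\n"
    "Is the proposed answer correct? Reply with exactly 'true' or 'false'."
)

def verify_binary(llm: LLM, question: str, answers: list[str]) -> list[str]:
    """Keep only the candidate answers that the judge model labels 'true'."""
    prompts = [JUDGE_TEMPLATE.format(question=question, answer=a) for a in answers]
    params = SamplingParams(temperature=0.0, max_tokens=4)  # greedy decoding, short verdict
    outputs = llm.generate(prompts, params)  # outputs come back in prompt order
    return [
        answer
        for answer, out in zip(answers, outputs)
        if out.outputs[0].text.strip().lower().startswith("true")
    ]
```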
Resources
Taught by
Trelis Research
Related Courses
Finetuning, Serving, and Evaluating Large Language Models in the Wild - Open Data Science via YouTube
Cloud Native Sustainable LLM Inference in Action - CNCF [Cloud Native Computing Foundation] via YouTube
Optimizing Kubernetes Cluster Scaling for Advanced Generative Models - Linux Foundation via YouTube
LLaMa for Developers - LinkedIn Learning
Scaling Video Ad Classification Across Millions of Classes with GenAI - Databricks via YouTube