LLM Tool Use - GPT4o-mini, Groq, and Llama.cpp
Offered By: Trelis Research via YouTube
Course Description
Overview
Explore advanced techniques for large language model (LLM) tool use in this comprehensive video tutorial. Dive into function calling approaches for cheap, fast, local, and enterprise applications. Learn about tool use flow charts, function preparation tips, and code walk-throughs for effective tool integration. Discover how to implement prompt preparation and recursive tool use. Evaluate the performance of models including GPT4o-mini, Phi-3 Mini, and Llama 3 in zero-shot and fine-tuned scenarios. Gain insights into parallel function calling, low-latency tool use with Groq, and local inference on a Mac using llama.cpp. Access valuable resources and final tool use tips to sharpen your LLM tool use skills.
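To make the function calling flow concrete before the walk-through, here is a minimal sketch of the loop the course covers: describe a function as a tool, let the model request a call, run it, and feed the result back until the model replies in plain text. This is an illustrative sketch using the OpenAI Python client, not the course's own code; the get_weather function and its schema are hypothetical, and the gpt-4o-mini model name is an assumption (see trelis.com/ADVANCED-inference for the actual materials).

import json
from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical local function the model is allowed to call.
def get_weather(city: str) -> str:
    return json.dumps({"city": city, "temp_c": 21, "conditions": "sunny"})

# JSON-schema description of the tool, as expected by the chat completions API.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Dublin?"}]

# Recursive tool use: keep calling the model until it stops requesting tools.
while True:
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    )
    msg = response.choices[0].message
    if not msg.tool_calls:
        print(msg.content)  # final natural-language answer
        break
    messages.append(msg)  # keep the assistant's tool request in the history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)  # dispatch to the matching local function
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": result,
        })

Because the loop iterates over every entry in tool_calls, the same structure also handles parallel function calling, where the model requests several tools in a single turn.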
Syllabus
Function Calling - Cheap, Fast, Local and Enterprise
Video Overview
Tool Use Flow Chart
Function preparation tips
Resources: trelis.com/ADVANCED-inference
Code walk-through for function / tool preparation
Prompt preparation and Recursive tool use
GPT4o-mini tool use performance
Zero shot prompting and Runpod Phi-3 endpoint setup
Phi-3 Mini Zero Shot Performance
Parallel function calling with Phi-3
Low latency tool use with Groq - Zero shot
Groq Llama 3 8B Zero Shot Tool Use Performance
Groq Llama 3 8B Fine-tune Performance
Groq Llama 3 70B Fine-tune Performance
Local Phi-3 Inference on a Mac with llama.cpp
Final Tool Use Tips
Resources
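The Groq and llama.cpp chapters in the syllabus reuse the same client-side pattern, since both expose OpenAI-compatible chat endpoints. The sketch below is illustrative rather than taken from the course: the Groq model name, the GGUF file name, and the port are assumptions, and the llama.cpp server is started separately (for example with: llama-server -m Phi-3-mini-4k-instruct-q4.gguf --port 8080).

import os
from openai import OpenAI

# Low-latency hosted inference via Groq's OpenAI-compatible endpoint
# (the model name is an assumption; check Groq's current model list).
groq = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

# Local inference on a Mac: llama.cpp's server exposes the same API shape
# once it is running on localhost (file name and port are assumptions).
local = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

for client, model in [(groq, "llama3-8b-8192"), (local, "phi-3-mini")]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Reply with OK."}],
    )
    print(reply.choices[0].message.content)

Only the base_url and model name change on the client side, which makes it straightforward to run the same tool use prompts against hosted and local endpoints.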
Taught by
Trelis Research
Related Courses
Prompt Engineering Guide - Independent
AI Mastery: Ultimate Crash Course in Prompt Engineering for Large Language Models - Data Science Dojo via YouTube
Essentials of Prompt Engineering (Japanese) (Sub) 日本語字幕版 - Amazon Web Services via AWS Skill Builder
Essentials of Prompt Engineering (Simplified Chinese) - Amazon Web Services via AWS Skill Builder
Essentials of Prompt Engineering (Japanese) - Amazon Web Services via AWS Skill Builder