The Best Tiny Language Models - Performance, Fine-tuning, and Function-calling
Offered By: Trelis Research via YouTube
Course Description
Overview
Explore the world of small language models in this comprehensive video lecture. Learn about the benefits of tiny LLMs, compare the performance of TinyLlama, DeepSeek Coder, and Phi 2, and discover techniques for fine-tuning and function-calling with quantized models. Gain insights into the challenges and tricks of working with tiny models, and find out which ones are considered the best in the field. Access valuable resources, including fine-tuning datasets, one-click LLM templates, and advanced repositories for fine-tuning and inference. Enhance your understanding of small language models and their practical applications in AI development.
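The overview refers to function-calling with quantized models via llama.cpp. As a rough illustration of what that workflow can look like, here is a minimal sketch using the llama-cpp-python bindings with prompt-based function calling. The GGUF file name, the function schema, and the Zephyr-style prompt template are assumptions for illustration only, not the exact setup used in the video or the Trelis repositories.

```python
# Minimal sketch: prompt-based function calling against a GGUF-quantized
# tiny model loaded with llama-cpp-python. Model path and function schema
# below are hypothetical placeholders.
import json
from llama_cpp import Llama

# Load a quantized model from a local GGUF file (path is illustrative).
llm = Llama(
    model_path="tinyllama-1.1b-chat.Q4_K_M.gguf",
    n_ctx=2048,
    verbose=False,
)

# Describe the available function in the system prompt and ask the model
# to reply with a JSON call when the function is needed.
system = (
    "You have access to this function:\n"
    '{"name": "get_weather", "parameters": {"city": "string"}}\n'
    "If the user request needs it, reply ONLY with JSON like "
    '{"name": "get_weather", "arguments": {"city": "..."}}.'
)

# Zephyr-style chat template used by TinyLlama-Chat (assumed here).
prompt = (
    f"<|system|>\n{system}</s>\n"
    "<|user|>\nWhat's the weather in Dublin?</s>\n"
    "<|assistant|>\n"
)

out = llm(prompt, max_tokens=128, temperature=0.0, stop=["</s>"])
text = out["choices"][0]["text"].strip()

# Tiny models often wrap or truncate JSON, so parse defensively.
try:
    call = json.loads(text)
    print("Function call:", call["name"], call.get("arguments"))
except json.JSONDecodeError:
    print("Plain response:", text)
```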
Syllabus
Best Small Language Models
Video Overview
Benefits of Tiny LLMs
Fine-tuning and Inference Repo Overviews
Performance Comparison - TinyLlama, DeepSeek Coder, and Phi 2
Fine-tuning Tiny Language Models
Function-calling quantized models with llama.cpp
Challenges and Tricks - Function-calling with Tiny Models
What Are the Best Tiny Language Models?
Taught by
Trelis Research
Related Courses
Autogen and Local LLMs Create Realistic Stable Diffusion Model Autonomously
kasukanra via YouTube
Fine-Tuning a Local Mistral 7B Model - Step-by-Step Guide
All About AI via YouTube
No More Runtime Setup - Bundling, Distributing, Deploying, and Scaling LLMs Seamlessly with Ollama Operator
CNCF [Cloud Native Computing Foundation] via YouTube
Running LLMs in the Cloud - Approaches and Best Practices
CNCF [Cloud Native Computing Foundation] via YouTube