InternLM - A Strong Agentic Model for Math, Reasoning, and Function Calling
Offered By: Sam Witteveen via YouTube
Course Description
Overview
Explore a comprehensive video analysis of InternLM, a large language model focused on math, reasoning, and function calling capabilities. Dive into the model's performance on the Hugging Face leaderboard, examine its GitHub repository, and learn about LMDeploy for efficient deployment. Discover Lagent, InternLM's agent framework, and review the research paper detailing its architecture. Investigate available models and datasets on Hugging Face, and see how to run InternLM on Ollama. Follow along with a hands-on Colab implementation, understanding the chat format and function calling features. Conclude with instructions for running InternLM locally using Ollama, gaining practical insights into this powerful language model's applications and potential.
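The chat-format and function-calling sections of the video center on InternLM's prompt template. As a rough sketch (assuming the ChatML-style `<|im_start|>`/`<|im_end|>` markers used by InternLM2's chat models; the helper name is illustrative), a single conversation turn can be assembled like this:

```python
# Sketch only: assumes InternLM2 chat models expect a ChatML-style prompt
# with <|im_start|>/<|im_end|> role markers.
def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn chat prompt in ChatML-style format."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # the model generates its reply from here
    )

prompt = build_prompt("You are a helpful assistant.", "What is 2 + 2?")
print(prompt)
```

In practice the tokenizer's built-in chat template (e.g. `tokenizer.apply_chat_template` in Hugging Face Transformers) should be preferred over hand-built strings, but seeing the raw markers makes the format covered in the video easier to follow.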
Syllabus
Intro
Hugging Face Leaderboard
InternLM GitHub
InternLM: LMDeploy
InternLM: Lagent
InternLM Paper
InternLM Hugging Face Models and Datasets
InternLM on Ollama
Code Time
InternLM Hugging Face Implementation Colab
InternLM Chat Format
InternLM Function Calling
InternLM Running Locally through Ollama
Taught by
Sam Witteveen
Related Courses
The GenAI Stack - From Zero to Database-Backed Support Bot (Docker via YouTube)
Ollama Crash Course: Running AI Models Locally Offline on CPU (1littlecoder via YouTube)
AI Anytime, Anywhere - Getting Started with LLMs on Your Laptop (Docker via YouTube)
Rust Ollama Tutorial - Interfacing with Ollama API Using ollama-rs (Jeremy Chone via YouTube)
Ollama: Libraries, Vision Models, and OpenAI Compatibility Updates (Sam Witteveen via YouTube)