LaMini-LM - Mini Models Maxi Data
Offered By: Sam Witteveen via YouTube
Course Description
Overview
Explore the creation of LaMini-LM, a collection of small language models distilled from large-scale instruction data, in this informative video. Dive into the key ideas, dataset creation process, and model training methodology outlined in the research paper. Examine the diverse range of models trained, including LaMini-Neo-1.3B, LaMini-GPT-1.5B, and LaMini-Flan-T5-783M. Learn about the Hugging Face dataset used and see the same prompts demonstrated on ChatGPT for comparison. Gain practical insights through code examples and the provided Colab notebooks for hands-on experimentation with these mini models trained on maxi data.
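As a taste of the hands-on portion, here is a minimal sketch (not the video's exact notebook code) of running one of the distilled models with the Hugging Face transformers library. The model ID MBZUAI/LaMini-Flan-T5-783M is the checkpoint name published on the Hugging Face Hub; the prompt and generation settings are illustrative assumptions.

```python
# A minimal sketch, assuming the published Hub checkpoint name; not the
# video's exact notebook code.
from transformers import pipeline

# LaMini-Flan-T5 is a sequence-to-sequence model, so use the
# text2text-generation pipeline rather than plain text-generation.
generator = pipeline(
    "text2text-generation",
    model="MBZUAI/LaMini-Flan-T5-783M",
)

# Illustrative prompt and generation length.
response = generator(
    "Explain what knowledge distillation is in one sentence.",
    max_length=256,
)
print(response[0]["generated_text"])
```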
Syllabus
Intro
Key Idea
Diagram
Dataset
Hugging Face Dataset
Trained a Lot of Models
Paper
Prompts on ChatGPT
Code Time
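For the "Hugging Face Dataset" and "Code Time" portions of the syllabus, a hedged sketch of browsing the instruction data follows. The dataset ID MBZUAI/LaMini-instruction is the collection published alongside the paper; the column names used here are assumptions based on its dataset card, and streaming avoids downloading the full ~2.58M pairs.

```python
# A minimal sketch, not the notebook from the video: peek at a few records
# of the LaMini instruction dataset. Column names ("instruction",
# "response") are assumed from the dataset card.
from itertools import islice

from datasets import load_dataset

# Stream the dataset so only the first few records are fetched.
dataset = load_dataset("MBZUAI/LaMini-instruction", split="train", streaming=True)

for example in islice(dataset, 3):
    print("Instruction:", example["instruction"])
    print("Response:", example["response"])
    print("-" * 40)
```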
Taught by
Sam Witteveen
Related Courses
Google BARD and ChatGPT AI for Increased Productivity (Udemy)
Bringing LLM to the Enterprise - Training From Scratch or Just Fine-Tune With Cerebras-GPT (Prodramp via YouTube)
Generative AI and Long-Term Memory for LLMs (James Briggs via YouTube)
Extractive Q&A With Haystack and FastAPI in Python (James Briggs via YouTube)
OpenAssistant First Models Are Here! - Open-Source ChatGPT (Yannic Kilcher via YouTube)