LLaMA2 for Multilingual Fine Tuning
Offered By: Sam Witteveen via YouTube
Course Description
Overview
Explore the multilingual fine-tuning capabilities of several language models in this video. Review the LLaMA 2 paper before moving into hands-on code demonstrations. Compare the performance of the LLaMA 2, Bloom, GLM2-6B, and MT5 models on multilingual tasks. Discover the potential of the open-sourced RedPajama-INCITE 7B Base model as an alternative to LLaMA. Gain insights into the strengths and limitations of each model for multilingual applications through practical examples and analysis.
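The video's own notebooks are not reproduced here, but a minimal sketch of how such a side-by-side multilingual comparison could be set up with Hugging Face transformers might look like the following; the model checkpoints, the prompt, and the generate() helper are illustrative assumptions, not the course's actual code.

```python
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer

def generate(model_name, prompt, seq2seq=False, max_new_tokens=64):
    # MT5 is an encoder-decoder model; Bloom, LLaMA 2, and RedPajama-INCITE
    # are decoder-only causal LMs, so the model class differs.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model_cls = AutoModelForSeq2SeqLM if seq2seq else AutoModelForCausalLM
    model = model_cls.from_pretrained(model_name, device_map="auto")  # requires `accelerate`
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Run the same multilingual prompt against two of the models discussed.
prompt = "Translate to French: The weather is nice today."
print(generate("bigscience/bloom-7b1", prompt))
print(generate("google/mt5-base", prompt, seq2seq=True))
```

In practice the comparison would repeat this across prompts in several languages and across all of the models covered in the syllabus below.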
Syllabus
Intro
LLaMA 2 Paper
Code Time
LLaMA 2
Bloom
GLM2-6B
MT5
Open-Source LLaMA Alternative: RedPajama-INCITE 7B Base
Taught by
Sam Witteveen
Related Courses
Google BARD and ChatGPT AI for Increased Productivity (Udemy)
Bringing LLM to the Enterprise - Training From Scratch or Just Fine-Tune With Cerebras-GPT (Prodramp via YouTube)
Generative AI and Long-Term Memory for LLMs (James Briggs via YouTube)
Extractive Q&A With Haystack and FastAPI in Python (James Briggs via YouTube)
OpenAssistant First Models Are Here! - Open-Source ChatGPT (Yannic Kilcher via YouTube)