LLaMA2 for Multilingual Fine Tuning
Offered By: Sam Witteveen via YouTube
Course Description
Overview
Explore the multilingual fine-tuning capabilities of various language models in this informative video. Review the LLaMA 2 paper before moving on to hands-on code demonstrations. Compare the performance of the LLaMA 2, Bloom, GLM2-6B, and MT5 models on multilingual tasks. Discover the potential of the open-source RedPajama-INCITE 7B Base model as an alternative to LLaMA. Gain insights into the strengths and limitations of each model for multilingual applications through practical examples and analysis.
Syllabus
Intro
LLaMA 2 Paper
Code Time
LLaMA 2
Bloom
GLM2-6B
MT5
Open-Source LLaMA Alternative: RedPajama-INCITE 7B Base
Taught by
Sam Witteveen
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent