Training Llama 2 in Julia - Scaling Large Language Models
Offered By: The Julia Programming Language via YouTube
Course Description
Overview
Discover how to train large language models like Llama 2 using Julia in this JuliaCon 2024 conference talk. Learn how to scale neural network training across multiple GPUs using Dagger.jl and Flux.jl. Explore the challenges and solutions involved in building a parallel training pipeline for LLMs, including model and data description, job setup, and performance scaling. Gain insights into fine-tuning pre-trained models with techniques such as Low-Rank Adaptation (LoRA) for specialized tasks. Understand the components required for efficient large-scale GPU workloads in the Julia ecosystem, and how the same approach applies to model types beyond LLMs.
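To make the Dagger.jl + Flux.jl combination concrete, here is a minimal data-parallel sketch. It is not the speakers' pipeline: the toy model, the `grad_shard` and `parallel_step!` helpers, and the shard sizes are all illustrative, and plain Dagger tasks stand in for the GPUs the talk targets. It assumes recent Flux (explicit-gradient API) and Functors' multi-tree `fmap`.

```julia
using Dagger, Flux, Functors

# Toy stand-in for a Llama-style network (illustrative only).
model = Chain(Dense(64 => 128, relu), Dense(128 => 64))
opt_state = Flux.setup(Adam(1f-3), model)

loss(m, x, y) = Flux.mse(m(x), y)

# Gradient of the loss on one shard of a batch (hypothetical helper).
grad_shard(m, x, y) = Flux.gradient(m_ -> loss(m_, x, y), m)[1]

function parallel_step!(opt_state, model, shards)
    # One Dagger task per shard; Dagger schedules them across available
    # threads/workers (on GPUs one would constrain placement via scopes).
    tasks = [Dagger.@spawn grad_shard(model, x, y) for (x, y) in shards]
    grads = fetch.(tasks)
    # Average per-shard gradients leaf by leaf; `nothing` leaves (e.g. for
    # activation functions) carry no gradient and are passed through.
    avg = fmap((gs...) -> gs[1] === nothing ? nothing : sum(gs) ./ length(gs),
               grads...)
    Flux.update!(opt_state, model, avg)
end

shards = [(randn(Float32, 64, 32), randn(Float32, 64, 32)) for _ in 1:4]
parallel_step!(opt_state, model, shards)
```

The LoRA idea the talk mentions can likewise be sketched in a few lines of Flux: keep the pretrained weight frozen and learn only a low-rank correction, y = base(x) + (alpha/r) * B * A * x. The `LoRADense` name, rank, and scaling below are illustrative rather than the speakers' code, and the `trainable` option assumes a recent Flux with the `@layer` macro.

```julia
using Flux

struct LoRADense{D,MA,MB}
    base::D      # frozen pretrained Dense (assumed linear, identity activation)
    A::MA        # r × in, small random init
    B::MB        # out × r, zero init, so training starts at the pretrained model
    scale::Float32
end

# Register with Flux; only A and B receive gradients and optimizer state.
Flux.@layer LoRADense trainable=(A, B)

function LoRADense(base::Dense; r::Integer=8, alpha::Real=16)
    out, in = size(base.weight)
    LoRADense(base, 0.01f0 .* randn(Float32, r, in),
              zeros(Float32, out, r), Float32(alpha / r))
end

(l::LoRADense)(x) = l.base(x) .+ l.scale .* (l.B * (l.A * x))
```

Wrapping each projection layer of a pretrained model this way and then training as usual is the essence of LoRA fine-tuning: the large pretrained weights stay fixed while only the small A and B matrices are updated.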
Syllabus
Train a Llama(2) in Julia! | D. Gandhi, J. P. Samaroo | JuliaCon 2024
Taught by
The Julia Programming Language
Related Courses
Julia Scientific Programming (University of Cape Town via Coursera)
Julia for Beginners in Data Science (Coursera Project Network via Coursera)
Linear Regression and Multiple Linear Regression in Julia (Coursera Project Network via Coursera)
Decision Tree and Random Forest Classification using Julia (Coursera Project Network via Coursera)
Logistic Regression for Classification using Julia (Coursera Project Network via Coursera)