Training Llama 2 in Julia - Scaling Large Language Models

Offered By: The Julia Programming Language via YouTube

Tags

Julia Courses, Neural Networks Courses, Parallel Computing Courses, GPU Computing Courses, Model Training Courses, Fine-Tuning Courses

Course Description

Overview

Discover how to train large language models like Llama 2 using Julia in this JuliaCon 2024 conference talk. Learn about scaling neural network training across multiple GPUs simultaneously using Dagger.jl and Flux.jl. Explore the challenges and solutions in implementing a parallel training pipeline for LLMs, including model and data description, job setup, and performance scaling. Gain insights into fine-tuning pre-trained models with techniques like Low-Rank Adaptation (LoRA) for specialized tasks. Understand the components required for efficient large-scale GPU workloads in the Julia ecosystem and the potential applications for various model types beyond LLMs.
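
This listing ships no code, but the data-parallel idea behind the talk can be sketched briefly. The following is a minimal, illustrative sketch and not the speakers' pipeline: it assumes Flux 0.14+ and Dagger.jl on local workers, spawns one Dagger task per data shard, averages the resulting gradient trees leaf by leaf, and applies a single optimizer step. The helper names (loss, grad_on_shard, avg_leaves, data_parallel_step!) are hypothetical, and real multi-GPU training additionally needs device placement (Dagger scopes) and parameter synchronization.

```julia
using Dagger, Flux, Functors

# Mean-squared-error loss and a per-shard gradient helper (illustrative names).
loss(m, x, y) = Flux.Losses.mse(m(x), y)
grad_on_shard(model, x, y) = Flux.gradient(m -> loss(m, x, y), model)[1]

# Average gradient trees leaf-by-leaf; `nothing` marks non-differentiable fields.
avg_leaves(ls...) = first(ls) === nothing ? nothing : sum(ls) ./ length(ls)

function data_parallel_step!(opt_state, model, shards)
    # One Dagger task per shard; Dagger schedules them over the available
    # workers (and, with appropriate scopes, over GPUs).
    tasks = [Dagger.@spawn grad_on_shard(model, x, y) for (x, y) in shards]
    grads = Functors.fmap(avg_leaves, fetch.(tasks)...)
    Flux.update!(opt_state, model, grads)
end

# Toy usage with a small Dense model on random data.
model  = Dense(16 => 4)
opt    = Flux.setup(Adam(1f-3), model)
shards = [(randn(Float32, 16, 8), randn(Float32, 4, 8)) for _ in 1:4]
data_parallel_step!(opt, model, shards)
```

Because Dagger.@spawn returns a task handle immediately, the per-shard gradient computations can run concurrently, and fetch blocks only when the results are combined.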
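
The LoRA technique mentioned above can likewise be sketched in Flux: keep the pre-trained weight matrix frozen and learn a low-rank update B*A beside it. The LoRADense wrapper below is a hypothetical illustration, assuming a recent Flux with the Flux.@layer macro; it is not the implementation from the talk.

```julia
using Flux

# Hypothetical LoRA wrapper: y = base(x) + (alpha/r) * B * (A * x),
# where the pre-trained base layer is frozen and only A and B are trained.
struct LoRADense{D,M}
    base::D        # frozen pre-trained Dense layer
    A::M           # r x d_in, small random init
    B::M           # d_out x r, zero init so training starts at the base model
    scale::Float32 # alpha / r
end

function LoRADense(base::Dense; r::Integer = 8, alpha::Real = 16)
    d_out, d_in = size(base.weight)
    LoRADense(base,
              0.01f0 .* randn(Float32, r, d_in),
              zeros(Float32, d_out, r),
              Float32(alpha / r))
end

(l::LoRADense)(x) = l.base(x) .+ l.scale .* (l.B * (l.A * x))

# Only the adapter matrices receive gradients; the base stays frozen.
Flux.@layer LoRADense trainable=(A, B)

# Toy fine-tuning step on random data.
lora = LoRADense(Dense(32 => 32))
opt  = Flux.setup(Adam(1f-3), lora)
x, y = randn(Float32, 32, 8), randn(Float32, 32, 8)
gs   = Flux.gradient(m -> Flux.Losses.mse(m(x), y), lora)
Flux.update!(opt, lora, gs[1])
```

Initializing B to zero means the wrapped layer initially reproduces the pre-trained model exactly, and since only A and B are trained, the optimizer state stays small relative to full fine-tuning.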

Syllabus

Train a Llama(2) in Julia! | Gandhi, P Samaroo | JuliaCon 2024


Taught by

The Julia Programming Language

Related Courses

Intro to Parallel Programming
Nvidia via Udacity
Introduction to Linear Models and Matrix Algebra
Harvard University via edX
Introduction to Parallel Programming Using OpenMP and MPI
Tomsk State University via Coursera
Supercomputing
Partnership for Advanced Computing in Europe via FutureLearn
Fundamentals of Parallelism on Intel Architecture
Intel via Coursera