LLMs Fine Tuning and Inferencing Using ONNX Runtime - Workshop
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore an end-to-end example of fine-tuning and inferencing Large Language Models (LLMs) using ONNX Runtime in this comprehensive workshop. Dive into the process of adapting the latest LLMs, such as Llama, Mistral, and Zephyr, for real-world applications, and learn how to leverage existing technologies for a quick and simple setup. Gain hands-on experience with AzureML using the Azure Container for PyTorch (ACPT) environment, which bundles tools such as DeepSpeed for distributed training and LoRA for efficient fine-tuning. Discover ONNX Runtime as a cross-platform accelerator for both inference and training, offering faster execution and portability across frameworks and devices.
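The workshop's own notebooks are not reproduced on this page, but the export-then-infer flow that ONNX Runtime enables can be illustrated with a minimal sketch. The tiny linear layer below stands in for an LLM purely to keep the example runnable; the file name, tensor shapes, and execution provider are illustrative assumptions, not the workshop's actual setup.

```python
# Minimal sketch of exporting a PyTorch model to ONNX and running it
# with ONNX Runtime. The placeholder model, "toy_model.onnx", and the
# input shape are assumptions for illustration only.
import torch
import onnxruntime as ort

model = torch.nn.Linear(16, 4).eval()   # stand-in for an LLM
example = torch.randn(1, 16)

# Export to the ONNX format that ONNX Runtime consumes across platforms.
torch.onnx.export(model, example, "toy_model.onnx",
                  input_names=["x"], output_names=["y"])

# Run the exported graph; swap in CUDAExecutionProvider (if available)
# for GPU execution.
session = ort.InferenceSession("toy_model.onnx",
                               providers=["CPUExecutionProvider"])
outputs = session.run(None, {"x": example.numpy()})
print(outputs[0].shape)  # (1, 4)
```

For the fine-tuning side, the workshop pairs this flow with AzureML's ACPT environment, DeepSpeed, and LoRA; the hands-on material itself is covered in the recording rather than on this page.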
Syllabus
Workshop: LLMs Fine Tuning and Inferencing Using ONNX Runtime - Abhishek Jindal, Sunghoon Choi & Kshama Pawar
Taught by
Linux Foundation
Related Courses
TensorFlow: Working with NLP (LinkedIn Learning)
Introduction to Video Editing - Video Editing Tutorials (Great Learning via YouTube)
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning (Python Engineer via YouTube)
GPT3 and Finetuning the Core Objective Functions - A Deep Dive (David Shapiro ~ AI via YouTube)
How to Build a Q&A AI in Python - Open-Domain Question-Answering (James Briggs via YouTube)