LLMs Fine Tuning and Inferencing Using ONNX Runtime - Workshop
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore an end-to-end example of fine-tuning and inferencing Large Language Models (LLMs) using ONNX Runtime in this comprehensive workshop. Dive into the process of adapting the latest LLMs, such as Llama, Mistral, and Zephyr, for real-world applications. Learn how to leverage existing technologies for a quick and simple setup. Gain hands-on experience using AzureML with the Azure Container for PyTorch (ACPT) environment, which includes cutting-edge tools such as DeepSpeed for distributed training and LoRA for efficient fine-tuning. Discover the capabilities of ONNX Runtime as a cross-platform machine-learning accelerator for inference and training, and understand its potential for faster execution and portability across frameworks and devices.
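The workflow described above has two halves: parameter-efficient fine-tuning of the base model (for example with LoRA and DeepSpeed on AzureML) and then running the exported model with ONNX Runtime. As a minimal, hedged sketch of the inference half only, the snippet below loads an ONNX model with the onnxruntime Python API. The model file name, input names, and execution-provider list are placeholder assumptions for illustration, not anything specified by the workshop.

```python
# Minimal sketch (not from the workshop materials): running an ONNX-exported
# LLM checkpoint with ONNX Runtime. The model path and input names below are
# hypothetical and depend on how the model was exported.
import numpy as np
import onnxruntime as ort

# Prefer a GPU execution provider when available, falling back to CPU.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("llama_finetuned.onnx", providers=providers)

# Token IDs would normally come from the model's tokenizer; they are
# hard-coded here only to keep the sketch self-contained.
input_ids = np.array([[1, 15043, 3186]], dtype=np.int64)
attention_mask = np.ones_like(input_ids)

outputs = session.run(
    None,  # fetch all model outputs (typically next-token logits)
    {"input_ids": input_ids, "attention_mask": attention_mask},
)
print(outputs[0].shape)  # e.g. (batch, sequence_length, vocab_size)
```

In practice, decoder-style LLMs exported for generation usually also expect past key/value inputs and an autoregressive decoding loop, which this sketch omits for brevity.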
Syllabus
Workshop: LLMs Fine Tuning and Inferencing Using ONNX Runtime - Abhishek Jindal, Sunghoon Choi & Kshama Pawar
Taught by
Linux Foundation
Related Courses
How to Do Stable Diffusion LORA Training by Using Web UI on Different Models (Software Engineering Courses - SE Courses via YouTube)
MicroPython & WiFi (Kevin McAleer via YouTube)
Building a Wireless Community Sensor Network with LoRa (Hackaday via YouTube)
ComfyUI - Node Based Stable Diffusion UI (Olivio Sarikas via YouTube)
AI Masterclass for Everyone - Stable Diffusion, ControlNet, Depth Map, LORA, and VR (Hugh Hou via YouTube)