
Detoxification of Large Language Models Using TrustyAI Detoxify and HuggingFace SFTTrainer

Offered By: DevConf via YouTube

Tags

Machine Learning Courses
Supervised Fine-Tuning Courses
Prompt Tuning Courses

Course Description

Overview

Explore the process of detoxifying large language models in this DevConf.US 2024 conference talk. Learn how to leverage TrustyAI Detoxify, an open-source library for scoring and rephrasing toxic content, in conjunction with HuggingFace's Supervised Fine-tuning Trainer (SFTTrainer) to streamline the detoxification process. Discover the challenges of curating high-quality, human-aligned training data and how TrustyAI Detoxify can be used to rephrase toxic content before supervised fine-tuning. Gain insights into the capabilities of TrustyAI Detoxify and its practical application in improving the ethical performance of language models. Follow along as speaker Christina Xu demonstrates the integration of these tools to streamline the detoxification protocol and create more responsible AI systems.
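
The workflow the talk describes can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the speaker's actual code: the TMaRCo interface from TrustyAI Detoxify (the import path, the load_models/score/mask/rephrase methods, and the trustyai/gminus / trustyai/gplus expert models) is assumed from published TrustyAI examples and may differ by version, the base model id is a placeholder, and the SFTTrainer arguments follow the older pre-SFTConfig style of the trl library.

```python
# Hedged sketch: rephrase toxic samples with TrustyAI Detoxify (TMaRCo),
# then fine-tune a base model on the cleaned text with HuggingFace's SFTTrainer.
# Method names, model ids, and argument names below are assumptions; check the
# installed trustyai and trl versions for the exact APIs.
from datasets import Dataset
from trl import SFTTrainer
from trustyai.detoxify import TMaRCo  # assumed import path

# 1. Score and rephrase potentially toxic training samples.
tmarco = TMaRCo()
tmarco.load_models(["trustyai/gminus", "trustyai/gplus"])  # expert / anti-expert LMs

raw_texts = ["<toxic or borderline training sample>"]
scores = tmarco.score(raw_texts)                  # per-token toxicity disagreement scores
masked = tmarco.mask(raw_texts, scores=scores)    # mask the offending spans
rephrased = tmarco.rephrase(raw_texts, masked)    # generate detoxified rewrites

# 2. Build a dataset of detoxified text for supervised fine-tuning.
train_dataset = Dataset.from_dict({"text": rephrased})

# 3. Fine-tune a base causal LM on the rephrased data.
trainer = SFTTrainer(
    model="facebook/opt-350m",        # placeholder base model
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=512,
)
trainer.train()
```

The key design point is that detoxification happens on the training data rather than at inference time: TrustyAI Detoxify rewrites toxic samples into human-aligned alternatives, and SFTTrainer then teaches the base model to prefer those rewrites.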

Syllabus

Intro
Motivation
Objectives
PFT
Solution
Evaluation
Questions


Taught by

DevConf

Related Courses

Big Self-Supervised Models Are Strong Semi-Supervised Learners
Yannic Kilcher via YouTube
A Transformer-Based Framework for Multivariate Time Series Representation Learning
Launchpad via YouTube
Inside ChatGPT - Unveiling the Training Process of OpenAI's Language Model
Krish Naik via YouTube
Fine Tune GPT-3.5 Turbo
Data Science Dojo via YouTube
Yi 34B: The Rise of Powerful Mid-Sized Models - Base, 200k, and Chat
Sam Witteveen via YouTube