
LlamaGuard 7B: Input-Output Safeguard Model for Data Science and Machine Learning

Offered By: The Machine Learning Engineer via YouTube

Tags

Machine Learning Courses, Content Moderation Courses, Fine-Tuning Courses

Course Description

Overview

Explore LlamaGuard, a fine-tuned version of Meta's Llama 2 7B model designed to detect unsafe content in both the inputs to and the outputs of large language models. This 25-minute video covers the project's objectives and implementation, and an accompanying Jupyter notebook provides hands-on experience applying the model in data science and machine learning workflows.
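
Below is a minimal sketch of how such a moderation call might look with the Hugging Face transformers library. It assumes access to the gated checkpoint "meta-llama/LlamaGuard-7b" and is not the course's own notebook code; the course video and notebook may use a different setup.

# Minimal sketch: moderating a prompt and a reply with LlamaGuard 7B.
# Assumption: the Hugging Face checkpoint "meta-llama/LlamaGuard-7b" is available.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def moderate(chat):
    # The tokenizer's chat template wraps the conversation in
    # LlamaGuard's safety-policy prompt before generation.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(
        input_ids=input_ids, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id
    )
    prompt_len = input_ids.shape[-1]
    # The model replies "safe" or "unsafe" (with violated category codes).
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Input moderation: classify the user's prompt on its own.
print(moderate([{"role": "user", "content": "How do I pick a lock?"}]))

# Output moderation: classify the assistant's reply in context.
print(moderate([
    {"role": "user", "content": "How do I pick a lock?"},
    {"role": "assistant", "content": "I can't help with that."},
]))

The same function handles both directions: passing only the user turn checks the input, while passing the full exchange checks the model's response.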

Syllabus

LlamaGuard 7B, Input-Output Safeguard model #datascience #machinelearning


Taught by

The Machine Learning Engineer

Related Courses

TensorFlow: Working with NLP
LinkedIn Learning
Introduction to Video Editing - Video Editing Tutorials
Great Learning via YouTube
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning
Python Engineer via YouTube
GPT3 and Finetuning the Core Objective Functions - A Deep Dive
David Shapiro ~ AI via YouTube
How to Build a Q&A AI in Python - Open-Domain Question-Answering
James Briggs via YouTube