
LlamaGuard 7B: Input-Output Safeguard Model for Data Science and Machine Learning

Offered By: The Machine Learning Engineer via YouTube

Tags

Machine Learning Courses
Content Moderation Courses
Fine-Tuning Courses

Course Description

Overview

Explore LlamaGuard, a fine-tuned version of Meta's Llama 2 7B model designed to detect unsafe content in both the inputs to and outputs of large language models. This 25-minute video walks through the project's objectives and implementation, and the accompanying Jupyter notebook provides hands-on experience with the LlamaGuard model, giving practical insight into how it is applied in data science and machine learning contexts.
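For a concrete sense of what the notebook covers, the sketch below shows one way to query LlamaGuard through the Hugging Face transformers library. The model ID meta-llama/LlamaGuard-7b, the CUDA/bfloat16 setup, and the moderate helper are illustrative assumptions rather than material taken from the course; the checkpoint is gated and requires accepting Meta's license on Hugging Face.

# Minimal sketch (assumption: the gated meta-llama/LlamaGuard-7b checkpoint
# and a CUDA device are available; not taken from the course itself).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
)

def moderate(chat):
    # Format the conversation with LlamaGuard's chat template, generate a verdict,
    # and return only the newly generated text (the safety classification).
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Classify a user prompt together with the assistant's reply (input-output safeguarding).
print(moderate([
    {"role": "user", "content": "How do I kill a process in Linux?"},
    {"role": "assistant", "content": "Use the kill command followed by the process ID (PID)."},
]))

The model replies with a short text verdict, typically "safe" or "unsafe" followed by the violated category codes, which is what makes it usable as a filter on both the prompts sent to an LLM and the responses it produces.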

Syllabus

LlamaGuard 7B, Input-Output Safeguard model #datascience #machinelearning


Taught by

The Machine Learning Engineer

Related Courses

Convolutions for Text Classification with Keras
Coursera Project Network via Coursera
How Technology is Shaping Democracy and the 2020 Election
Stanford University via Coursera
Microsoft Azure Cognitive Services: Content Moderator
Pluralsight
Machine Learning and Microsoft Cognitive Services
Pluralsight
Deep Dive on Amazon Rekognition: Building Computer Vision-Based Smart Applications (Italian)
Amazon Web Services via AWS Skill Builder