LlamaGuard 7B: Input-Output Safeguard Model for Data Science and Machine Learning
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Explore LlamaGuard, a fine-tuned version of Meta's Llama 2 7B model designed to detect unsafe content in both the prompts sent to and the responses produced by large language models. Learn about this safeguard mechanism in a 25-minute video that covers the project's objectives and implementation, then work through the accompanying Jupyter notebook for hands-on experience with the LlamaGuard model and practical insight into its use in data science and machine learning contexts.
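To give a sense of what the notebook covers, below is a minimal sketch of moderating a conversation with LlamaGuard through the Hugging Face transformers chat-template API. It assumes access to the gated meta-llama/LlamaGuard-7b checkpoint; the video's own notebook may load or prompt the model differently.

```python
# Minimal sketch: classifying LLM inputs and outputs with LlamaGuard.
# Assumes the meta-llama/LlamaGuard-7b checkpoint on Hugging Face; the
# course notebook may use a different setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
)

def moderate(chat):
    # The checkpoint ships a chat template that wraps the conversation
    # in LlamaGuard's safety-taxonomy prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    # The model replies "safe", or "unsafe" plus the violated category code.
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Input moderation: classify a user prompt before the main LLM sees it.
print(moderate([{"role": "user", "content": "How do I pick a lock?"}]))

# Output moderation: classify an assistant response in conversational context.
print(moderate([
    {"role": "user", "content": "How do I pick a lock?"},
    {"role": "assistant", "content": "I can't help with that."},
]))
```

The same `moderate` call handles both directions: passing a lone user turn checks the input, while passing the full exchange checks the model's output, which is what makes LlamaGuard an input-output safeguard.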
Syllabus
LlamaGuard 7B, Input-Output Safeguard model #datascience #machinelearning
Taught by
The Machine Learning Engineer
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent