Protecting Online Communities with Azure AI Content Safety
Offered By: Pluralsight
Course Description
Overview
Explore the essentials of Azure AI Content Safety. This course will teach you how to moderate text and image content, and detect harmful or inappropriate material using advanced filtering and prompt shields.
Moderating user-generated content so that it stays safe and appropriate is a critical challenge for many online platforms: keeping harmful, violent, or otherwise inappropriate material out is essential to maintaining a welcoming environment for all users. In this course, Protecting Online Communities with Azure AI Content Safety, you’ll learn to moderate and manage content effectively using Azure AI tools. First, you’ll explore how to create an Azure AI Content Safety instance and log in to Content Safety Studio. Next, you’ll discover how to perform text content moderation, including filtering content against severity thresholds for hate, violence, sexual content, and self-harm, as well as screening for specific terms using blocklists. Finally, you’ll learn how to moderate image content and use Prompt Shields to detect attacks such as jailbreaks and indirect prompt injections. When you’re finished with this course, you’ll have the skills and knowledge of Azure AI Content Safety needed to keep your platform’s content safe, compliant, and welcoming for all users.
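As a taste of the text-moderation workflow the course covers, the sketch below shows how threshold-based filtering might look with the Azure AI Content Safety Python SDK (azure-ai-contentsafety). It is a minimal illustration rather than course material: the endpoint, key, sample text, threshold value, and blocklist name are placeholders, and response fields such as categories_analysis and severity reflect the SDK's 1.x surface, which should be verified against the current documentation.

```python
# Minimal sketch: text moderation with Azure AI Content Safety (Python SDK).
# Assumes the azure-ai-contentsafety package (1.x); endpoint and key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Hypothetical resource values -- substitute your own Content Safety resource.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Analyze a piece of user-generated text across the built-in categories
# (Hate, SelfHarm, Sexual, Violence).
request = AnalyzeTextOptions(
    text="Example user comment to screen before publishing.",
    # blocklist_names=["banned-terms"],  # optional: screen against a pre-created blocklist (hypothetical name)
)
response = client.analyze_text(request)

# Reject content whose severity meets or exceeds a per-category threshold.
THRESHOLD = 2  # placeholder; tune per category to your platform's policy
for result in response.categories_analysis:
    if result.severity is not None and result.severity >= THRESHOLD:
        print(f"Blocked: {result.category} severity {result.severity}")
```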
Syllabus
- Content Moderation with Azure AI Safety (23 mins)
Taught by
Janani Ravi
Related Courses
- Convolutions for Text Classification with Keras (Coursera Project Network via Coursera)
- How Technology is Shaping Democracy and the 2020 Election (Stanford University via Coursera)
- Microsoft Azure Cognitive Services: Content Moderator (Pluralsight)
- Machine Learning and Microsoft Cognitive Services (Pluralsight)
- Deep Dive on Amazon Rekognition: Building Computer Visions Based Smart Applications (Italian) (Amazon Web Services via AWS Skill Builder)