AI for Good - Detecting Harmful Content at Scale
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the challenges of detecting harmful content at scale in this 51-minute podcast episode featuring Matar Haller, VP of Data & AI at ActiveFence. Dive into the complexities of online platform abuse, including brand and legal risks, degraded user experience, and the blurred line between online and offline harm. Learn about AI-driven content moderation, optimizing for both speed and accuracy, cultural sensitivity in AI training, and continuous adaptation to evolving threats. Discover strategies for testing and deploying machine learning models, monitoring hallucinations in transformer models, and balancing moderation efforts. Gain insights into improving production code quality and addressing AI detection concerns in the ever-changing landscape of online content moderation.
Syllabus
Matar's preferred coffee
Takeaways
The talk that stood out
Online hate speech challenges
Evaluate harmful media API
Content moderation: AI models
Optimizing speed and accuracy
Cultural reference AI training
Functional Tests
Continuous adaptation of AI
AI detection concerns
Fine-Tuned vs Off-the-Shelf
Monitoring Transformer Model Hallucinations
Auditing process ensures accuracy
Testing strategies for ML
Modeling hate speech deployment
Improving production code quality
Finding balance in Moderation
Model's expertise: Cultural Sensitivity
Wrap up
Taught by
MLOps.community
Related Courses
Convolutions for Text Classification with Keras - Coursera Project Network via Coursera
How Technology is Shaping Democracy and the 2020 Election - Stanford University via Coursera
Microsoft Azure Cognitive Services: Content Moderator - Pluralsight
Machine Learning and Microsoft Cognitive Services - Pluralsight
Deep Dive on Amazon Rekognition: Building Computer Visions Based Smart Applications (Italian) - Amazon Web Services via AWS Skill Builder