AI Safety, Security, and Play - Episode 137
Offered By: DevSecCon via YouTube
Course Description
Overview
Explore AI safety and security in this 52-minute DevSecCon episode featuring David Haber, co-founder of Lakera.ai and creator of Gandalf. Dive into prompt injections, AI behavior, risks, and vulnerabilities. Learn about data poisoning, protections, and the motivation behind Gandalf's creation. Examine two approaches to informing Large Language Models (LLMs) about sensitive data, discussing their pros and cons. Gain insights into LLM security and the importance of considering model-specific knowledge. Discover expert perspectives on AI safety, security, and play from David Haber in this informative discussion.
Syllabus
Ep. #137, AI Safety, Security, and Play
Taught by
DevSecCon
Related Courses
AI Security Engineering - Modeling - Detecting - Mitigating New VulnerabilitiesRSA Conference via YouTube Trustworthy Machine Learning: Challenges and Frameworks
USENIX Enigma Conference via YouTube Smashing the ML Stack for Fun and Lawsuits
Black Hat via YouTube Learning Under Data Poisoning
Simons Institute via YouTube Understanding Security Threats Against Machine - Deep Learning Applications
Devoxx via YouTube