YoVDO

AI Safety, Security, and Play - Episode 137

Offered By: DevSecCon via YouTube

Tags

Artificial Intelligence Courses Data Poisoning Courses Prompt Injection Courses

Course Description

Overview

Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Explore AI safety and security in this 52-minute DevSecCon episode featuring David Haber, co-founder of Lakera.ai and creator of Gandalf. Dive into prompt injections, AI behavior, risks, and vulnerabilities. Learn about data poisoning, protections, and the motivation behind Gandalf's creation. Examine two approaches to informing Large Language Models (LLMs) about sensitive data, discussing their pros and cons. Gain insights into LLM security and the importance of considering model-specific knowledge. Discover expert perspectives on AI safety, security, and play from David Haber in this informative discussion.

Syllabus

Ep. #137, AI Safety, Security, and Play


Taught by

DevSecCon

Related Courses

AI Security Engineering - Modeling - Detecting - Mitigating New Vulnerabilities
RSA Conference via YouTube
Trustworthy Machine Learning: Challenges and Frameworks
USENIX Enigma Conference via YouTube
Smashing the ML Stack for Fun and Lawsuits
Black Hat via YouTube
Learning Under Data Poisoning
Simons Institute via YouTube
Understanding Security Threats Against Machine - Deep Learning Applications
Devoxx via YouTube