Gentle Introduction to LLM Security - Top 10 Risks
Offered By: Conf42 via YouTube
Course Description
Overview
Explore a comprehensive overview of the top 10 security risks associated with Large Language Models (LLMs) in this 19-minute conference talk from Conf42 LLMs 2024. Delve into crucial topics such as jailbreaking, prompt injection, data poisoning, denial of service attacks, model theft, data leakage, insecure outputs, plugin vulnerabilities, agent insecurities, and supply chain risks. Gain valuable insights into the LLM security risk landscape and discover essential resources for further learning. Perfect for developers, security professionals, and anyone interested in understanding the potential vulnerabilities of AI language models.
Syllabus
intro
preamble
eugene neelou
llm security top 10 risks
jailbreak
prompt injection
data poisoning
denial of service
model theft
data leakage
insecure output
insecure plugin
insecure agent
insecure supply chain
llm security risk map
resources
Taught by
Conf42
Related Courses
AI Security Engineering - Modeling - Detecting - Mitigating New VulnerabilitiesRSA Conference via YouTube Trustworthy Machine Learning: Challenges and Frameworks
USENIX Enigma Conference via YouTube Smashing the ML Stack for Fun and Lawsuits
Black Hat via YouTube Learning Under Data Poisoning
Simons Institute via YouTube Understanding Security Threats Against Machine - Deep Learning Applications
Devoxx via YouTube