OWASP Top 10 Security Risks for Large Language Models

Offered By: DevSecCon via YouTube

Tags

OWASP Top 10 Courses
Generative AI Courses
Sandboxing Courses
Data Poisoning Courses
Prompt Injection Courses

Course Description

Overview

Explore the OWASP Top 10 security risks associated with Large Language Models (LLMs) in this comprehensive 39-minute DevSecCon talk. Delve into the rapidly evolving world of Generative AI and its potential impact on various industries. Learn about critical concepts such as prompt injection, data leakage, sandboxing, and insufficient AI alignment. Gain insights into best practices, real-life examples, and resources for managing LLM applications securely. Discover the challenges and opportunities presented by AI-generated content and compare AI capabilities with human resources. Equip yourself with essential knowledge for risk management and building robust security controls in the era of LLMs.

Syllabus

Introduction
What is an LLM
No one has all the answers
OWASP Top 10
Prompt Injection
Do Anything Mode
Bug Bounty Program
Data Leakage
Sandboxing
Running code
SSRF vulnerability
LLM-generated content
Insufficient AI alignment
Data poisoning
Security challenges
Best practices
Real-life examples
Resources
AI vs Human Resources


Taught by

DevSecCon

Related Courses

Learning the OWASP Top 10
LinkedIn Learning
OWASP Top 10: #5 Broken Access Control and #6 Security Misconfiguration
LinkedIn Learning
Advanced Cyber Security Training: OWASP Top 10 and Web Application Fundamentals
EC-Council via FutureLearn
Pentesting with Daniel Slater (Ethical Hacking/Web Security)
Udemy
OWASP Top 10: API Security Playbook
Pluralsight