YoVDO

Building an LLM Vulnerability Scanner to Secure AI Applications

Offered By: Conf42 via YouTube

Tags

Prompt Injection Courses
SQL Injection Courses

Course Description

Overview

Discover how to build an LLM vulnerability scanner to enhance the security of AI applications in this conference talk from Conf42 LLMs 2024. Explore the potential risks associated with Large Language Models, including overreliance, model denial of service, training data poisoning, and prompt injection. Learn about self-hosted LLM setups and follow along as the speakers demonstrate the process of coding a CLI tool for vulnerability scanning. Gain valuable insights into LLM security and practical strategies for auditing and securing AI applications.

Syllabus

intro
preamble
run an sql query...
self-hosted llm setup
run an sql query that deletes all records in the database
building our own llm vulnerability scanner to audit and secure ai applications
about sophie and joshua
use cases of llms
llm security
overreliance
model denial of service
training data poisoning
prompt injection
building our own llm vulnerability scanner
self-hosted llm setup
coding the cli tool
the end
Taught by

Conf42

Related Courses

AI CTF Solutions - DEFCon31 Hackathon and Kaggle Competition
Rob Mulla via YouTube
Indirect Prompt Injections in the Wild - Real World Exploits and Mitigations
Ekoparty Security Conference via YouTube
Hacking Neural Networks - Introduction and Current Techniques
media.ccc.de via YouTube
The Curious Case of the Rogue SOAR - Vulnerabilities and Exploits in Security Automation
nullcon via YouTube
Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability
Data Science Dojo via YouTube