Securing LLM-Powered Applications - Overcoming Security and Privacy Challenges
Offered By: Devoxx via YouTube
Course Description
Overview
Explore the security and privacy challenges of LLM-powered applications in this 51-minute conference talk from Devoxx. Delve into common issues like prompt injection, key leakage, and misuse of private customer data for model training. Gain insights on legal restrictions and understand how general security vulnerabilities can impact LLM behavior and outcomes. Acquire a comprehensive overview of potential vulnerabilities, strategies for ensuring data privacy compliance, and best practices for developing secure applications that leverage Large Language Models. Learn to navigate the exciting possibilities of AI in applications while effectively mitigating associated risks.
Syllabus
Securing LLM-Powered Applications: Overcoming Security and Privacy Challenges by Brian Vermeer, Lize
Taught by
Devoxx
Related Courses
AI CTF Solutions - DEFCon31 Hackathon and Kaggle CompetitionRob Mulla via YouTube Indirect Prompt Injections in the Wild - Real World Exploits and Mitigations
Ekoparty Security Conference via YouTube Hacking Neural Networks - Introduction and Current Techniques
media.ccc.de via YouTube The Curious Case of the Rogue SOAR - Vulnerabilities and Exploits in Security Automation
nullcon via YouTube Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability
Data Science Dojo via YouTube