YoVDO

LLM Security: Practical Protection for AI Developers

Offered By: Databricks via YouTube

Tags

Fine-Tuning Courses Data Poisoning Courses Retrieval Augmented Generation Courses Prompt Injection Courses

Course Description

Overview

Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Explore practical strategies for securing Large Language Models (LLMs) in AI development during this 29-minute conference talk. Delve into the security risks associated with utilizing open-source LLMs, particularly when handling proprietary data through fine-tuning or retrieval-augmented generation (RAG). Examine real-world examples of top LLM security risks and learn about emerging standards from OWASP, NIST, and MITRE. Discover how a validation framework can empower developers to innovate while safeguarding against indirect prompt injection, prompt extraction, data poisoning, and supply chain risks. Gain insights from Yaron Singer, CEO & Co-Founder of Robust Intelligence, on deploying LLMs securely without hindering innovation.

Syllabus

LLM Security: Practical Protection for AI Developers


Taught by

Databricks

Related Courses

AI Security Engineering - Modeling - Detecting - Mitigating New Vulnerabilities
RSA Conference via YouTube
Trustworthy Machine Learning: Challenges and Frameworks
USENIX Enigma Conference via YouTube
Smashing the ML Stack for Fun and Lawsuits
Black Hat via YouTube
Learning Under Data Poisoning
Simons Institute via YouTube
Understanding Security Threats Against Machine - Deep Learning Applications
Devoxx via YouTube