YoVDO

LLM Security: Practical Protection for AI Developers

Offered By: Databricks via YouTube

Tags

Fine-Tuning Courses Data Poisoning Courses Retrieval Augmented Generation Courses Prompt Injection Courses

Course Description

Overview

Save Big on Coursera Plus. 7,000+ courses at $160 off. Limited Time Only!
Explore practical strategies for securing Large Language Models (LLMs) in AI development during this 29-minute conference talk. Delve into the security risks associated with utilizing open-source LLMs, particularly when handling proprietary data through fine-tuning or retrieval-augmented generation (RAG). Examine real-world examples of top LLM security risks and learn about emerging standards from OWASP, NIST, and MITRE. Discover how a validation framework can empower developers to innovate while safeguarding against indirect prompt injection, prompt extraction, data poisoning, and supply chain risks. Gain insights from Yaron Singer, CEO & Co-Founder of Robust Intelligence, on deploying LLMs securely without hindering innovation.

Syllabus

LLM Security: Practical Protection for AI Developers


Taught by

Databricks

Related Courses

TensorFlow: Working with NLP
LinkedIn Learning
Introduction to Video Editing - Video Editing Tutorials
Great Learning via YouTube
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning
Python Engineer via YouTube
GPT3 and Finetuning the Core Objective Functions - A Deep Dive
David Shapiro ~ AI via YouTube
How to Build a Q&A AI in Python - Open-Domain Question-Answering
James Briggs via YouTube