YoVDO

LLM Security: Practical Protection for AI Developers

Offered By: Databricks via YouTube

Tags

Fine-Tuning Courses
Data Poisoning Courses
Retrieval Augmented Generation Courses
Prompt Injection Courses

Course Description

Overview

Explore practical strategies for securing Large Language Models (LLMs) in AI development during this 29-minute conference talk. Delve into the security risks associated with utilizing open-source LLMs, particularly when handling proprietary data through fine-tuning or retrieval-augmented generation (RAG). Examine real-world examples of top LLM security risks and learn about emerging standards from OWASP, NIST, and MITRE. Discover how a validation framework can empower developers to innovate while safeguarding against indirect prompt injection, prompt extraction, data poisoning, and supply chain risks. Gain insights from Yaron Singer, CEO & Co-Founder of Robust Intelligence, on deploying LLMs securely without hindering innovation.
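The validation-framework idea mentioned above can be sketched as a thin guard layer that screens both user input and retrieved RAG context before they reach the model. This is an illustrative sketch only, not Robust Intelligence's actual product; the pattern list, function names, and prompt template are all assumptions for demonstration.

```python
import re

# Hypothetical injection markers to screen for. A real validation
# framework would use far richer detection than keyword patterns.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?(system|hidden) prompt",
]

def validate_text(text: str) -> bool:
    """Return True if the text passes the illustrative checks."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def guarded_prompt(user_input: str, retrieved_context: str) -> str:
    """Assemble a prompt only when both the user input and the
    retrieved RAG context pass validation. Screening the retrieved
    context as well is what addresses *indirect* prompt injection,
    where malicious instructions hide inside ingested documents."""
    for name, text in (("user input", user_input),
                       ("retrieved context", retrieved_context)):
        if not validate_text(text):
            raise ValueError(f"Blocked suspicious {name}")
    return f"Context:\n{retrieved_context}\n\nQuestion:\n{user_input}"
```

The same checkpoint can sit on the model's output as well, so that extraction attempts are caught even when the input looks benign.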

Syllabus

LLM Security: Practical Protection for AI Developers


Taught by

Databricks

Related Courses

Pinecone Vercel Starter Template and RAG - Live Code Review Part 2
Pinecone via YouTube
Will LLMs Kill Search? The Future of Information Retrieval
Aleksa Gordić - The AI Epiphany via YouTube
RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines
James Briggs via YouTube
Advanced RAG - Contextual Compressors and Filters - Lecture 4
Sam Witteveen via YouTube
LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search
James Briggs via YouTube