YoVDO

Logically Securing the Illogically Logical Use of Large Language Models

Offered By: Linux Foundation via YouTube

Tags

Cybersecurity Courses
Risk Management Courses
Incident Response Courses
Access Control Courses
Configuration Management Courses

Course Description

Overview

Explore the critical intersection of security and Large Language Models (LLMs) in this 43-minute conference talk presented by Sarah Evans of Dell Technologies and Jay White of Microsoft at a Linux Foundation event. Delve into the security risks that accompany emerging technologies like LLMs, focusing on a concrete scenario: downloading a model from Hugging Face and applying it to internal datasets. Gain insights into applying established risk management frameworks, such as NIST SP 800-53 (Rev. 5) and the emerging NIST AI RMF 1.0, to LLM development and adoption.

Learn about key risk control families, including access control, incident response, configuration management, and supply chain risk management. Discover how to bridge the gap between traditional security fundamentals and LLM development, enabling more secure design and more efficient enterprise implementation. Walk away with practical, preemptive risk management measures that can be applied directly to LLM projects, ensuring a more secure and robust development process.
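To make the Hugging Face scenario concrete, the sketch below shows how two of those control families, configuration management and supply chain risk management, might translate into code when pulling a model for internal use. This is a minimal illustration, not material from the talk itself: the repo ID and commit hash are hypothetical placeholders, and it assumes the huggingface_hub Python package is installed.

    import hashlib
    from pathlib import Path

    from huggingface_hub import snapshot_download  # pip install huggingface_hub

    # Hypothetical model repo and pinned commit -- substitute your own.
    REPO_ID = "example-org/example-model"
    PINNED_REVISION = "0123456789abcdef0123456789abcdef01234567"  # full commit hash

    # Configuration management: pin an exact revision instead of a floating
    # branch like "main", and restrict downloads to safetensors weights and
    # config files (avoiding pickle-based .bin artifacts).
    local_dir = snapshot_download(
        repo_id=REPO_ID,
        revision=PINNED_REVISION,
        allow_patterns=["*.safetensors", "*.json", "tokenizer*"],
    )

    # Supply chain risk management: record a digest of every downloaded
    # artifact so later runs can detect tampering or unexpected changes.
    for path in sorted(Path(local_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            print(f"{digest}  {path.relative_to(local_dir)}")

Pinning the revision and logging artifact digests are the kinds of preemptive measures the talk maps to NIST SP 800-53 control families: the pinned commit supports configuration management, and the recorded hashes give supply chain reviews and incident responders a baseline to check against.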

Syllabus

Logically Securing the Illogically Logical Use of Large Language Models - Sarah Evans & Jay White


Taught by

Linux Foundation

Related Courses

Introduction aux conteneurs (Introduction to Containers)
Microsoft Virtual Academy via OpenClassrooms
DevOps for Developers: How to Get Started
Microsoft via edX
Configuration Management on Google Cloud Platform
Google via Coursera
Windows Server 2016: Infrastructure
Microsoft via edX
Introduction to SAP HANA Administration
SAP Learning