Authorization Best Practices for Systems Using Large Language Models
Offered By: Cloud Security Alliance via YouTube
Course Description
Overview
Explore authorization best practices for systems utilizing Large Language Models in this 26-minute conference talk by the Cloud Security Alliance. Gain insights into the unique security considerations that arise with the integration of LLMs, including prompt injection attacks and vector database risks. Discover the components and design patterns involved in LLM-based systems, focusing on authorization implications specific to each element. Learn about best practices and patterns for various use cases, such as retrieval augmented generation (RAG) with vector databases, API calls to external systems, and SQL queries generated by LLMs. Delve into the fundamental concerns surrounding the development of agentic systems, equipping yourself with essential knowledge to build more robust and secure LLM-powered applications.
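One of the talk's core patterns, authorization-aware retrieval in a RAG pipeline, can be sketched in a few lines: filter vector-store results by the caller's permissions before anything reaches the LLM prompt, so a prompt injection cannot widen access the retriever already denied. The sketch below is illustrative only; the `Document`, `VectorStore`, and role names are hypothetical, and the keyword-overlap scoring stands in for real embedding similarity to keep the example self-contained.

```python
# Hypothetical sketch: permission-filtered retrieval for RAG.
# Documents carry ACLs; the filter runs at retrieval time, BEFORE
# results are inserted into the LLM prompt.

from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set  # roles permitted to read this document

@dataclass
class VectorStore:
    docs: list = field(default_factory=list)

    def search(self, query, user_roles, top_k=3):
        # A real vector store ranks by embedding similarity; a toy
        # keyword-overlap score keeps this sketch dependency-free.
        def score(doc):
            return len(set(query.lower().split()) & set(doc.text.lower().split()))

        # Authorization filter: only documents the caller may read
        # are candidates for ranking at all.
        visible = [d for d in self.docs if d.allowed_roles & user_roles]
        return sorted(visible, key=score, reverse=True)[:top_k]

store = VectorStore()
store.docs.append(Document("d1", "quarterly revenue figures", {"finance"}))
store.docs.append(Document("d2", "public product roadmap", {"finance", "employee"}))

# An 'employee' caller never retrieves the finance-only document,
# no matter how the query (or an injected prompt) is phrased.
hits = store.search("revenue figures roadmap", {"employee"})
print([d.doc_id for d in hits])  # ['d2']
```

The same principle generalizes to the talk's other cases: LLM-generated SQL should execute under a database role scoped to the end user, and LLM-initiated API calls should carry the user's credentials rather than a broadly privileged service token.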
Syllabus
Authorization best practices for systems using Large Language Models
Taught by
Cloud Security Alliance
Related Courses
Architecting Microsoft Azure Solutions - Microsoft via edX
Internetwork Security - Indian Institute of Technology, Kharagpur via Swayam
Network Security - Georgia Institute of Technology via Udacity
Microsoft Professional Orientation: Cloud Administration - Microsoft via edX
Cyber Threats and Attack Vectors - University of Colorado System via Coursera