Confidential Containers for GPU Compute - Incorporating LLMs in AI Lift-and-Shift Strategy
Offered By: CNCF [Cloud Native Computing Foundation] via YouTube
Course Description
Overview
Explore the evolution of confidential containers and their integration with the GPU cloud-native stack for AI/ML workloads in this 32-minute conference talk. Delve into the transition from traditional environments to secure, isolated environments for sensitive data processing. Learn how Kata Containers enables confidential containers, providing security while preserving container flexibility. Examine a virtualization reference architecture that supports advanced scenarios such as GPUDirect RDMA. Discover the lift-and-shift approach for seamless migration of existing AI/ML workloads to confidential environments. Understand how this integration combines LLMs with GPU-accelerated computing, leveraging Kubernetes for orchestration while balancing computational power and data privacy.
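To make the lift-and-shift idea concrete: in a Kubernetes cluster, an existing GPU workload can often be redirected to a confidential Kata runtime by setting a runtime class on the pod, leaving the rest of the spec unchanged. The sketch below is illustrative only; the runtime class name, image, and GPU resource name are assumptions that depend on how the cluster is configured (the talk does not prescribe a specific manifest).

```yaml
# Sketch of a lift-and-shift pod spec, assuming the cluster has a
# Kata-based confidential RuntimeClass installed and the NVIDIA device
# plugin for GPU scheduling. Names below are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference
spec:
  runtimeClassName: kata-qemu-tdx      # assumption: confidential Kata runtime registered as a RuntimeClass
  containers:
    - name: inference
      image: registry.example.com/llm-inference:latest   # hypothetical workload image
      resources:
        limits:
          nvidia.com/gpu: 1            # GPU passed through into the confidential VM
```

The point of the pattern is that only `runtimeClassName` changes relative to a conventional GPU pod, which is what makes migration of existing AI/ML workloads "lift-and-shift" rather than a rewrite.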
Syllabus
Confidential Containers for GPU Compute: Incorporating LLMs in a Lift-and-Shift Strategy for AI
Taught by
CNCF [Cloud Native Computing Foundation]
Related Courses
Secure and Fast MicroVM for Serverless Computing (GOTO Conferences via YouTube)
KVM Status Update and Kata Containers - Keynote Sessions (Linux Foundation via YouTube)
Introducing SPDK Vhost FUSE Target for Accelerated File Access in VMs and Containers (Linux Foundation via YouTube)
From Secure Container to Secure Service (Linux Foundation via YouTube)
Build Serverless with Kubernetes, Kata Containers and Bare Metal Cloud - Alibaba's Approach (Linux Foundation via YouTube)