AI-Driven Performance Metaprogramming - Embedding Spaces for Program Analysis and Optimization
Offered By: Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube
Course Description
Overview
Explore the intersection of artificial intelligence and program performance optimization in this 45-minute talk from the Scalable Parallel Computing Lab at ETH Zurich. Delve into the concept of embedding spaces and their application in representing complex information across various domains. Examine how these embeddings can be leveraged to assess, analyze, and enhance program performance. Learn about the evolution of program representation techniques, from textual LLVM IR embeddings for GPU execution time prediction to more sophisticated graph-based representations. Discover the advantages of graph-based approaches in capturing crucial relationships such as data dependencies and data flows. Investigate DaCe's performance metaprogramming capabilities and its programmable graph-based IR. Gain insights into the use of graph neural networks (GNNs) for creating performance embeddings that capture general performance properties. Understand how these embeddings are applied in Performance Embeddings for Transfer Tuning to select optimization metaprograms that transform the IR graph.
Syllabus
AI-Driven Performance Metaprogramming
Taught by
Scalable Parallel Computing Lab, SPCL @ ETH Zurich
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Artificial Intelligence for Robotics (Stanford University via Udacity)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)