Rethinking and Improving Relative Position Encoding for Vision Transformer - Lecture 23
Offered By: University of Central Florida via YouTube
Course Description
Overview
Explore the advancements in Relative Position Encoding (RPE) for Vision Transformers in this 32-minute lecture from the University of Central Florida's CAP6412 2022 series. Delve into the background of self-attention mechanisms and position encoding techniques, focusing on the evolution from absolute to relative position encoding. Examine the improvements made to RPE, including the introduction of bias and contextual modes, a piecewise index function, and 2D relative position calculation. Analyze experimental results comparing directed vs. undirected approaches, shared vs. unshared implementations, and the impact of different numbers of buckets. Gain insights into component-wise analysis, complexity considerations, and visualizations that demonstrate the effectiveness of these enhancements. Conclude with a comprehensive understanding of how these innovations contribute to the performance of Vision Transformers in various computer vision tasks.
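To make the overview concrete, here is a minimal NumPy sketch of the two most tangible ingredients named above: a piecewise index function that buckets relative distances, and a bias-mode relative position term added to self-attention logits. This is not the authors' implementation; the parameter names alpha, beta, gamma mirror the paper's notation, but the default values, the 2D bucket combination, and all function names here are illustrative assumptions.

```python
import numpy as np

def piecewise_index(rel_pos, alpha=8, beta=16, gamma=32):
    """Bucket a signed relative distance.

    Offsets with |x| <= alpha keep their exact value; larger offsets are
    compressed logarithmically and clipped at beta, so a small, fixed
    number of buckets covers arbitrarily long ranges.
    """
    x = np.asarray(rel_pos, dtype=np.float64)
    a = np.abs(x)
    far = np.sign(x) * np.minimum(
        beta,
        np.round(alpha + np.log(np.maximum(a, 1e-9) / alpha)
                 / np.log(gamma / alpha) * (beta - alpha)),
    )
    return np.where(a <= alpha, x, far).astype(np.int64)

def relative_positions_2d(h, w):
    """Signed (dy, dx) offsets between every pair of patches on an h x w grid."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=-1)   # (h*w, 2)
    return coords[None, :, :] - coords[:, None, :]         # (h*w, h*w, 2)

def attention_with_rpe_bias(q, k, v, bias_table, bucket):
    """Single-head self-attention with a bias-mode relative position term.

    q, k, v: (n, d); bias_table: (num_buckets,) learned scalars;
    bucket: (n, n) non-negative bucket indices.
    """
    logits = q @ k.T / np.sqrt(q.shape[-1]) + bias_table[bucket]
    w = np.exp(logits - logits.max(-1, keepdims=True))
    return (w / w.sum(-1, keepdims=True)) @ v

# Usage on a 3x3 patch grid (9 tokens). Row and column offsets are bucketed
# independently and combined so each (dy, dx) pair gets its own bucket
# (a product-style combination; the exact scheme is an assumption here).
h = w = 3
beta = 16
rel = relative_positions_2d(h, w)                       # (9, 9, 2)
side = 2 * beta + 1                                     # buckets per axis
bucket = ((piecewise_index(rel[..., 0]) + beta) * side
          + (piecewise_index(rel[..., 1]) + beta))      # (9, 9)
rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, h * w, 4))
bias_table = rng.standard_normal(side * side) * 0.02    # one scalar per bucket
out = attention_with_rpe_bias(q, k, v, bias_table, bucket)
print(out.shape)                                        # (9, 4)
```

The piecewise function is what separates this scheme from a plain clip: nearby offsets stay individually distinguishable while distant ones share logarithmically spaced buckets, which is the trade-off probed by the "Piecewise vs. Clip" and "Number of buckets" experiments in the syllabus below.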
Syllabus
Intro
Background and previous work
Self-attention
Absolute Position Encoding and Relative Position Encoding (RPE)
RPE in Transformer-XL
Bias and Contextual Modes
A Piecewise Index Function
2D Relative Position Calculation
Experiments
Implementation details
Directed vs. Undirected
Bias vs. Contextual
Shared vs. Unshared
Piecewise vs. Clip
Number of buckets
Component-wise analysis
Complexity Analysis
Visualization
Conclusion
Taught by
UCF CRCV
Related Courses
AWS Certified Machine Learning - Specialty (LA) (A Cloud Guru)
Google Cloud AI Services Deep Dive (A Cloud Guru)
Introduction to Machine Learning (A Cloud Guru)
Deep Learning and Python Programming for AI with Microsoft Azure (Cloudswyft via FutureLearn)
Advanced Artificial Intelligence on Microsoft Azure: Deep Learning, Reinforcement Learning and Applied AI (Cloudswyft via FutureLearn)