UnSAM - Unsupervised Segmentation Anything Model: A New Approach to Image Segmentation
Offered By: Launchpad via YouTube
Course Description
Overview
Discover a new approach to image segmentation in this 11-minute video presentation by the Fellowship.ai team. Delve into the Unsupervised SAM (UnSAM) model, which performs automatic and promptable whole-image segmentation without manual data labeling. Learn how UnSAM employs a divide-and-conquer strategy to uncover the hierarchical structure of visual scenes by combining top-down and bottom-up clustering: a top-down stage partitions an image into instance- and semantic-level segments, and a bottom-up stage merges pixels into progressively larger groups, yielding the multi-granular masks used as pseudo-labels for training. Examine UnSAM's competitive performance across seven datasets, where it surpasses the previous state of the art in unsupervised segmentation by 11% in average recall (AR). Understand how UnSAM's self-supervised labels can also augment the supervised Segment Anything Model (SAM), outperforming it by over 6.7% in AR and 3.9% in average precision (AP) on the SA-1B dataset while using only minimal labeled data. Gain insights into this new benchmark in unsupervised segmentation and the power of self-supervised learning for complex visual tasks.
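The divide-and-conquer idea described above can be pictured as a two-stage clustering pipeline. The sketch below is a minimal, hypothetical illustration rather than the UnSAM implementation: the function names (`divide`, `conquer`), the use of agglomerative clustering over per-pixel features, the toy merge rule, and all parameter values are assumptions made for readability. The actual model operates on self-supervised backbone features and its own merging criteria.

```python
"""Conceptual sketch of a divide-and-conquer pseudo-mask pipeline.
NOT the authors' code: names, thresholds, and clustering choices are assumptions."""
import numpy as np
from scipy.ndimage import label
from sklearn.cluster import AgglomerativeClustering


def divide(features: np.ndarray, n_segments: int = 8) -> np.ndarray:
    """Top-down 'divide' step (assumed): partition the per-pixel feature map
    into coarse instance/semantic-level segments via clustering."""
    h, w, c = features.shape
    flat = features.reshape(-1, c)
    labels = AgglomerativeClustering(n_clusters=n_segments).fit_predict(flat)
    return labels.reshape(h, w)


def conquer(segments: np.ndarray, merge_rounds: int = 3) -> list[np.ndarray]:
    """Bottom-up 'conquer' step (assumed): repeatedly coarsen the segmentation,
    emitting one binary mask per connected region at each granularity level,
    so the result is a multi-granular pool of pseudo-masks."""
    masks = []
    current = segments.copy()
    for _ in range(merge_rounds):
        for seg_id in np.unique(current):
            region = current == seg_id
            # Split disconnected components so each mask covers a single region.
            components, n = label(region)
            for comp_id in range(1, n + 1):
                masks.append(components == comp_id)
        # Coarsen: roughly halve the number of distinct labels by pairing ids (toy rule).
        current = current // 2
    return masks


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for self-supervised per-pixel features of a small image.
    toy_features = rng.normal(size=(32, 32, 16))
    segs = divide(toy_features, n_segments=8)
    pseudo_masks = conquer(segs)
    print(f"generated {len(pseudo_masks)} multi-granular pseudo-masks")
```

Each merge round contributes masks at a coarser granularity, which is the spirit of the multi-granular pseudo-labels the video describes; the real system derives them from learned features rather than random toy data.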
Syllabus
Fellowship: UnSAM, Unsupervised Segmentation Anything Model - A New Approach to Image Segmentation
Taught by
Launchpad
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Computational Photography - Georgia Institute of Technology via Coursera
Einführung in Computer Vision (Introduction to Computer Vision) - Technische Universität München (Technical University of Munich) via Coursera
Introduction to Computer Vision - Georgia Institute of Technology via Udacity