SOTAVerified

Zero Shot Segmentation

Papers

Showing 101–134 of 134 papers

Title | Status | Hype
Learning Zero-Shot Material States Segmentation, by Implanting Natural Image Patterns in Synthetic Data | Code | 0
From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments | — | 0
Learning Segmented 3D Gaussians via Efficient Feature Unprojection for Zero-shot Neural Scene Segmentation | — | 0
SOS-Match: Segmentation for Open-Set Robust Correspondence Search and Robot Localization in Unstructured Environments | — | 0
Diffuse Attend and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion | — | 0
Testing the Segment Anything Model on radiology data | — | 0
OpenSD: Unified Open-Vocabulary Segmentation and Detection | Code | 0
SANeRF-HQ: Segment Anything for NeRF in High Quality | — | 0
ZeroPS: High-quality Cross-modal Knowledge Transfer for Zero-Shot 3D Part Segmentation | — | 0
Holistic Inverse Rendering of Complex Facade via Aerial 3D Scanning | — | 0
Zero-Shot Segmentation of Eye Features Using the Segment Anything Model (SAM) | Code | 0
Leveraging Large-Scale Pretrained Vision Foundation Models for Label-Efficient 3D Point Cloud Segmentation | — | 0
SILC: Improving Vision Language Pretraining with Self-Distillation | — | 0
Pedestrian Accessible Infrastructure Inventory: Assessing Zero-Shot Segmentation on Multi-Mode Geospatial Data for All Pedestrian Types | — | 0
Masked Momentum Contrastive Learning for Zero-shot Semantic Understanding | — | 0
Visual and Textual Prior Guided Mask Assemble for Few-Shot Segmentation and Beyond | — | 0
All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning | — | 0
Topological Data Analysis Guided Segment Anything Model Prompt Optimization for Zero-Shot Segmentation in Biological Imaging | — | 0
Zero-shot spatial layout conditioning for text-to-image diffusion models | — | 0
Zero-Shot Anomaly Detection with Pre-trained Segmentation Models | — | 0
Segment Anything Meets Semantic Communication | — | 0
Segment Anything in High Quality | Code | 0
Exploring Open-Vocabulary Semantic Segmentation without Human Labels | — | 0
SAM for Poultry Science | — | 0
Computer-Vision Benchmark Segment-Anything Model (SAM) in Medical Images: Accuracy in 12 Datasets | — | 0
SAM vs BET: A Comparative Study for Brain Extraction and Segmentation of Magnetic Resonance Images using Deep Learning | — | 0
Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging | — | 0
A Language-Guided Benchmark for Weakly Supervised Open Vocabulary Semantic Segmentation | Code | 0
Exploring Open-Vocabulary Semantic Segmentation from CLIP Vision Encoder Distillation Only | Code | 0
Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding | — | 0
3D Compositional Zero-shot Learning with DeCompositional Consensus | — | 0
Self-supervised Tumor Segmentation through Layer Decomposition | — | 0
Consistent Structural Relation Learning for Zero-Shot Segmentation | — | 0
Unsupervised Deep Learning for Bayesian Brain MRI Segmentation | Code | 0
Page 3 of 3

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Grounded HQ-SAM | Mean AP | 49.6 | — | Unverified
2 | Grounded-SAM | Mean AP | 46 | — | Unverified
3 | UNINEXT | Mean AP | 42.1 | — | Unverified
4 | HIPIE | Mean AP | 41.6 | — | Unverified
5 | SAN | Mean AP | 41.4 | — | Unverified
6 | ODISE | Mean AP | 38.7 | — | Unverified
7 | OpenSeeD | Mean AP | 36.1 | — | Unverified
8 | OpenSD | Mean AP | 35.8 | — | Unverified
9 | SGinW_Team (X-Decoder-L) | Mean AP | 32.2 | — | Unverified
10 | SGinW_Team (X-Decoder-B) | Mean AP | 27.7 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | COSMOS ViT-B/16 | mIoU | 17.7 | — | Unverified
2 | GEM (MetaCLIP) | mIoU | 17.1 | — | Unverified
3 | GEM (CLIP) | mIoU | 15.7 | — | Unverified
4 | CLIPSurgery | mIoU | 12.9 | — | Unverified
5 | MaskCLIP | mIoU | 10.2 | — | Unverified
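The tables above report Mean AP and mIoU, the standard metrics for detection-style and semantic zero-shot segmentation benchmarks. As a minimal illustration of the latter (not the evaluation code used by any of these leaderboards), mIoU can be computed from a per-class confusion matrix over predicted and ground-truth label maps; the function name and toy data below are our own:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union between two integer label maps."""
    # Accumulate a confusion matrix: rows = ground truth, cols = prediction.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, g in zip(pred.ravel(), gt.ravel()):
        cm[g, p] += 1
    # Per-class IoU = TP / (TP + FP + FN), averaged over classes that occur.
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    denom = tp + fp + fn
    valid = denom > 0
    return float((tp[valid] / denom[valid]).mean())

# Toy example: 2x2 label maps with two classes.
pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, gt, 2))  # averages IoU of class 0 (0.5) and class 1 (2/3)
```

Leaderboard numbers are this quantity in percent, computed over a whole validation set rather than a single image; Mean AP additionally averages precision over IoU thresholds per the COCO protocol.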