SOTAVerified

Zero-Shot Semantic Segmentation

Papers

Showing 1–50 of 60 papers

| Title | Status | Hype |
|---|---|---|
| Exploring Regional Clues in CLIP for Zero-Shot Semantic Segmentation | Code | 3 |
| EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks | Code | 3 |
| FLAIR: VLM with Fine-grained Language-informed Image Representations | Code | 2 |
| DiffCut: Catalyzing Zero-Shot Semantic Segmentation with Diffusion Features and Recursive Normalized Cut | Code | 2 |
| Cascade-CLIP: Cascaded Vision-Language Embeddings Alignment for Zero-Shot Semantic Segmentation | Code | 2 |
| ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation | Code | 2 |
| OpenObj: Open-Vocabulary Object-Level Neural Radiance Fields with Fine-Grained Understanding | Code | 1 |
| OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation | Code | 1 |
| Spectral Prompt Tuning: Unveiling Unseen Classes for Zero-Shot Semantic Segmentation | Code | 1 |
| SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference | Code | 1 |
| CLIP-DIY: CLIP Dense Inference Yields Open-Vocabulary Semantic Segmentation For-Free | Code | 1 |
| What a MESS: Multi-Domain Evaluation of Zero-Shot Semantic Segmentation | Code | 1 |
| Delving into Shape-aware Zero-shot Semantic Segmentation | Code | 1 |
| Open-Vocabulary Semantic Segmentation with Decoupled One-Pass Network | Code | 1 |
| ZegOT: Zero-shot Segmentation Through Optimal Transport of Text Prompts | Code | 1 |
| Zero-Shot Point Cloud Segmentation by Semantic-Visual Aware Synthesis | Code | 1 |
| Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models | Code | 1 |
| A Simple Baseline for Open-Vocabulary Semantic Segmentation with Pre-trained Vision-language Model | Code | 1 |
| Decoupling Zero-Shot Semantic Segmentation | Code | 1 |
| Extract Free Dense Labels from CLIP | Code | 1 |
| A Closer Look at Self-training for Zero-Label Semantic Segmentation | Code | 1 |
| From Pixel to Patch: Synthesize Context-aware Features for Zero-shot Semantic Segmentation | Code | 1 |
| Context-aware Feature Generation for Zero-shot Semantic Segmentation | Code | 1 |
| Zero-Shot Semantic Segmentation | Code | 1 |
| Split Matching for Inductive Zero-shot Semantic Segmentation | | 0 |
| 3D-PointZshotS: Geometry-Aware 3D Point Cloud Zero-Shot Semantic Segmentation Narrowing the Visual-Semantic Gap | Code | 0 |
| Bridge the Gap Between Visual and Linguistic Comprehension for Generalized Zero-shot Semantic Segmentation | | 0 |
| Disentangling CLIP for Multi-Object Perception | | 0 |
| Open-RGBT: Open-vocabulary RGB-T Zero-shot Semantic Segmentation in Open-world Environments | | 0 |
| Segment Anything Model for automated image data annotation: empirical studies using text prompts from Grounding DINO | | 0 |
| AlignZeg: Mitigating Objective Misalignment for Zero-shot Semantic Segmentation | | 0 |
| Semantics from Space: Satellite-Guided Thermal Semantic Segmentation Annotation for Aerial Field Robots | Code | 0 |
| Annotation Free Semantic Segmentation with Vision Foundation Models | | 0 |
| Language-Driven Visual Consensus for Zero-Shot Semantic Segmentation | | 0 |
| Learning Segmented 3D Gaussians via Efficient Feature Unprojection for Zero-shot Neural Scene Segmentation | | 0 |
| Unlocking the Potential of Pre-trained Vision Transformers for Few-Shot Semantic Segmentation through Relationship Descriptors | Code | 0 |
| CSL: Class-Agnostic Structure-Constrained Learning for Segmentation Including the Unseen | | 0 |
| SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding | | 0 |
| CLIP Is Also a Good Teacher: A New Learning Framework for Inductive Zero-shot Semantic Segmentation | | 0 |
| An easy zero-shot learning combination: Texture Sensitive Semantic Segmentation IceHrNet and Advanced Style Transfer Learning Strategy | Code | 0 |
| Masked Momentum Contrastive Learning for Zero-shot Semantic Understanding | | 0 |
| MixReorg: Cross-Modal Mixed Patch Reorganization is a Good Mask Learner for Open-World Semantic Segmentation | | 0 |
| Exploring Open-Vocabulary Semantic Segmentation without Human Labels | | 0 |
| Interactive Segment Anything NeRF with Feature Imitation | | 0 |
| MVP-SEG: Multi-View Prompt Learning for Open-Vocabulary Semantic Segmentation | | 0 |
| [CLS] Token is All You Need for Zero-Shot Semantic Segmentation | | 0 |
| SATR: Zero-Shot Semantic Segmentation of 3D Shapes | | 0 |
| Class Enhancement Losses with Pseudo Labels for Zero-shot Semantic Segmentation | | 0 |
| Exploring Open-Vocabulary Semantic Segmentation from CLIP Vision Encoder Distillation Only | Code | 0 |
| FreeSeg: Free Mask from Interpretable Contrastive Language-Image Pretraining for Semantic Segmentation | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OTSeg+ | Transductive Setting hIoU | 49.8 | | Unverified |
| 2 | CLIP-RC | Transductive Setting hIoU | 49.7 | | Unverified |
| 3 | OTSeg | Transductive Setting hIoU | 49.5 | | Unverified |
| 4 | ZegCLIP | Transductive Setting hIoU | 48.5 | | Unverified |
| 5 | MVP-SEG+ | Transductive Setting hIoU | 45.5 | | Unverified |
| 6 | FreeSeg | Transductive Setting hIoU | 45.3 | | Unverified |
| 7 | MaskCLIP+ | Transductive Setting hIoU | 45.0 | | Unverified |
| 8 | zsseg | Transductive Setting hIoU | 41.5 | | Unverified |
| 9 | DeOP | Inductive Setting hIoU | 38.2 | | Unverified |
| 10 | STRICT | Transductive Setting hIoU | 34.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CAT-Seg-L | Mean IoU | 38.14 | | Unverified |
| 2 | CAT-Seg-H | Mean IoU | 35.66 | | Unverified |
| 3 | CAT-Seg-B | Mean IoU | 33.74 | | Unverified |
| 4 | SAN-L | Mean IoU | 30.06 | | Unverified |
| 5 | Grounded-SAM-L | Mean IoU | 29.05 | | Unverified |
| 6 | Grounded-SAM-H | Mean IoU | 28.78 | | Unverified |
| 7 | Grounded-SAM-B | Mean IoU | 28.52 | | Unverified |
| 8 | OVSeg-L | Mean IoU | 26.94 | | Unverified |
| 9 | SAN-B | Mean IoU | 26.74 | | Unverified |
| 10 | OpenSeeD-T | Mean IoU | 24.33 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | OTSeg+ | Transductive Setting hIoU | 94.4 | | Unverified |
| 2 | OTSeg | Transductive Setting hIoU | 94.2 | | Unverified |
| 3 | CLIP-RC | Transductive Setting hIoU | 93.0 | | Unverified |
| 4 | ZegCLIP | Transductive Setting hIoU | 91.1 | | Unverified |
| 5 | MaskCLIP+ | Transductive Setting hIoU | 87.4 | | Unverified |
| 6 | FreeSeg | Transductive Setting hIoU | 86.9 | | Unverified |
| 7 | DeOP | Inductive Setting hIoU | 80.8 | | Unverified |
| 8 | zsseg | Transductive Setting hIoU | 79.3 | | Unverified |
| 9 | ZegFormer | Inductive Setting hIoU | 73.3 | | Unverified |
| 10 | STRICT | Transductive Setting hIoU | 49.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MAFT | unseen mIoU | 8.7 | | Unverified |