SOTAVerified

Open Vocabulary Object Detection

Open-vocabulary detection (OVD) aims to generalize beyond the limited number of base classes labeled during the training phase. The goal is to detect novel classes defined by an unbounded (open) vocabulary at inference.
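Most methods on this list follow the same CLIP-style recipe: embed region features and class names into a shared space, then classify each region by cosine similarity against whatever vocabulary is supplied at inference. A minimal sketch of that matching step, using toy embeddings as stand-ins for a real text/image encoder:

```python
import numpy as np

def classify_regions(region_feats, text_embeds, vocabulary):
    """Assign each region the vocabulary entry with the highest cosine similarity."""
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    sims = r @ t.T  # (num_regions, vocab_size) cosine similarities
    idx = sims.argmax(axis=1)
    return [vocabulary[i] for i in idx], sims.max(axis=1)

# Any vocabulary can be supplied at inference time -- no retraining needed.
vocab = ["zebra", "traffic cone", "espresso machine"]
text_embeds = np.eye(3)                    # placeholder text embeddings
region_feats = np.array([[0.9, 0.1, 0.0],  # region resembling "zebra"
                         [0.0, 0.2, 0.8]]) # region resembling "espresso machine"
labels, scores = classify_regions(region_feats, text_embeds, vocab)
```

Because classification reduces to nearest-neighbor lookup in the shared embedding space, swapping the vocabulary is as cheap as re-encoding the class names; the papers below differ mainly in how region features are produced and aligned.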

Papers

Showing 1–25 of 145 papers

| Title | Status | Hype |
|---|---|---|
| VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model | Code | 9 |
| YOLO-World: Real-Time Open-Vocabulary Object Detection | Code | 9 |
| Visual-RFT: Visual Reinforcement Fine-Tuning | Code | 7 |
| Real-time Transformer-based Open-Vocabulary Detection with Efficient Fusion Head | Code | 5 |
| Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement | Code | 4 |
| FG-CLIP: Fine-Grained Visual and Textual Alignment | Code | 4 |
| GLIPv2: Unifying Localization and Vision-Language Understanding | Code | 4 |
| Detecting Twenty-thousand Classes using Image-level Supervision | Code | 3 |
| Locate Anything on Earth: Advancing Open-Vocabulary Object Detection for Remote Sensing Community | Code | 3 |
| OmDet: Large-scale vision-language multi-dataset pre-training with multimodal detection network | Code | 3 |
| OVLW-DETR: Open-Vocabulary Light-Weighted Detection Transformer | Code | 3 |
| SHiNe: Semantic Hierarchy Nexus for Open-vocabulary Object Detection | Code | 2 |
| YOLOv8-AM: YOLOv8 Based on Effective Attention Mechanisms for Pediatric Wrist Fracture Detection | Code | 2 |
| Open Vocabulary Monocular 3D Object Detection | Code | 2 |
| OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation | Code | 2 |
| Open-Vocabulary DETR with Conditional Matching | Code | 2 |
| LaMI-DETR: Open-Vocabulary Detection with Language Model Instruction | Code | 2 |
| Is CLIP the main roadblock for fine-grained open-world perception? | Code | 2 |
| Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection | Code | 2 |
| Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector | Code | 2 |
| CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction | Code | 2 |
| Detect Everything with Few Examples | Code | 2 |
| Generative Region-Language Pretraining for Open-Ended Object Detection | Code | 2 |
| Mamba-YOLO-World: Marrying YOLO-World with Mamba for Open-Vocabulary Detection | Code | 2 |
| PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Cooperative Foundational Models | AP 0.5 | 50.3 | | Unverified |
| 2 | DE-ViT | AP 0.5 | 50 | | Unverified |
| 3 | Yolov8-nano | AP 0.5 | 47.2 | | Unverified |
| 4 | DITO | AP 0.5 | 46.1 | | Unverified |
| 5 | OV-DQUO(RN50x4) | AP 0.5 | 45.6 | | Unverified |
| 6 | LP-OVOD (OWL-ViT Proposals) | AP 0.5 | 44.9 | | Unverified |
| 7 | CLIPSelf | AP 0.5 | 44.3 | | Unverified |
| 8 | CORA+ | AP 0.5 | 43.1 | | Unverified |
| 9 | BARON | AP 0.5 | 42.7 | | Unverified |
| 10 | SIA-OVD (RN50x4) | AP 0.5 | 41.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LaMI-DETR | AP novel (LVIS base training) | 43.4 | | Unverified |
| 2 | DITO | AP novel (LVIS base training) | 40.4 | | Unverified |
| 3 | OV-DQUO(ViT-L/14) | AP novel (LVIS base training) | 39.3 | | Unverified |
| 4 | CoDet (EVA02-L) | AP novel (LVIS base training) | 37 | | Unverified |
| 5 | CLIPSelf | AP novel (LVIS base training) | 34.9 | | Unverified |
| 6 | OVMR | AP novel (LVIS base training) | 34.4 | | Unverified |
| 7 | DE-ViT | AP novel (LVIS base training) | 34.3 | | Unverified |
| 8 | CFM-ViT | AP novel (LVIS base training) | 33.9 | | Unverified |
| 9 | CLIM (RN50x64) | AP novel (LVIS base training) | 32.3 | | Unverified |
| 10 | RO-ViT | AP novel (LVIS base training) | 32.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Object-Centric-OVD | mask AP50 | 22.3 | | Unverified |
| 2 | ViLD | mask AP50 | 18.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Object-Centric-OVD | mask AP50 | 42.9 | | Unverified |
| 2 | Detic | mask AP50 | 42.2 | | Unverified |