
FindMeIfYouCan: Bringing Open Set metrics to near, far, and farther Out-of-Distribution Object Detection

2025-06-16

Daniel Montoya, Aymen Bouguerra, Alexandra Gomez-Villa, Fabio Arnez


Abstract

State-of-the-art Object Detection (OD) methods predominantly operate under a closed-world assumption, where test-time categories match those encountered during training. However, detecting and localizing unknown objects is crucial for safety-critical applications in domains such as autonomous driving and medical imaging. Recently, Out-Of-Distribution (OOD) detection has emerged as a vital research direction for OD, focusing on identifying incorrect predictions typically associated with unknown objects. This paper shows that the current evaluation protocol for OOD-OD violates the assumption of non-overlapping objects with respect to the In-Distribution (ID) datasets and obscures crucial situations, such as ignoring unknown objects, potentially leading to overconfidence in deployment scenarios where truly novel objects might be encountered. To address these limitations, we manually curate and enrich the existing benchmark by exploiting semantic similarity to create new evaluation splits categorized as near, far, and farther from the ID distribution. Additionally, we incorporate established metrics from the Open Set community, providing deeper insights into how effectively methods detect unknowns, when they ignore them, and when they mistakenly classify OOD objects as ID. Our comprehensive evaluation demonstrates that semantically and visually close OOD objects are easier to localize than far ones, but are also more easily confused with ID objects; far and farther objects are harder to localize but less prone to being mistaken for ID objects.
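The specific Open Set metrics adopted by the paper are not enumerated on this page. As a rough, non-authoritative illustration, two metrics commonly reported in OOD detection (AUROC and FPR@95TPR) can be computed from per-detection confidence scores, assuming ID detections are expected to score higher than OOD ones; the function names and score convention below are assumptions, not the paper's code.

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """Area under the ROC curve: the probability that a random ID
    detection scores higher than a random OOD detection (ties count half)."""
    id_scores = np.asarray(id_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    wins = (id_scores[:, None] > ood_scores[None, :]).sum()
    ties = (id_scores[:, None] == ood_scores[None, :]).sum()
    return (wins + 0.5 * ties) / (len(id_scores) * len(ood_scores))

def fpr_at_95_tpr(id_scores, ood_scores):
    """False positive rate on OOD detections when the acceptance
    threshold is set so that ~95% of ID detections are accepted."""
    thr = np.percentile(id_scores, 5)  # ~95% of ID scores lie at or above thr
    return float(np.mean(np.asarray(ood_scores) >= thr))
```

A perfectly separable detector yields AUROC = 1.0 and FPR@95TPR = 0.0; the near/far/farther splits described above would each receive their own metric values against the same ID set.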
