SOTAVerified

SimLTD: Simple Supervised and Semi-Supervised Long-Tailed Object Detection

2024-12-28 · CVPR 2025 · Code Available

Phi Vu Tran


Abstract

Recent years have witnessed tremendous advances in modern visual recognition systems. Despite such progress, many vision models still struggle with the open problem of learning from few exemplars. This paper focuses on the task of object detection in the setting where object classes follow a natural long-tailed distribution. Existing approaches to long-tailed detection resort to external ImageNet labels to augment the low-shot training instances. However, such dependency on a large labeled database is impractical and of limited utility in realistic scenarios. We propose a more versatile approach that leverages optional unlabeled images, which are easy to collect without the burden of human annotation. Our SimLTD framework is straightforward and intuitive, and consists of three simple steps: (1) pre-training on abundant head classes; (2) transfer learning on scarce tail classes; and (3) fine-tuning on a sampled set of both head and tail classes. Our approach can be viewed as an improved head-to-tail model transfer paradigm without the added complexities of meta-learning or knowledge distillation, as was required in past research. By harnessing supplementary unlabeled images, without extra image labels, SimLTD establishes new record results on the challenging LVIS v1 benchmark across both supervised and semi-supervised settings.
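The three-step schedule from the abstract can be sketched in a few lines. Everything below is a hypothetical illustration, not the authors' implementation: the `train` routine is a placeholder for a detector training phase, and the dataset splits and `sample_ratio` are made-up stand-ins.

```python
import random

def train(model_state, samples, stage):
    # Placeholder for one training phase; a real pipeline would update
    # detector weights here instead of recording the sample count.
    model_state = dict(model_state)
    model_state[stage] = len(samples)
    return model_state

def simltd_schedule(head, tail, sample_ratio=0.5, seed=0):
    """Run the three SimLTD steps and return the final (toy) model state."""
    rng = random.Random(seed)
    model = {}
    # Step 1: pre-train on the abundant head classes.
    model = train(model, head, "pretrain_head")
    # Step 2: transfer-learn on the scarce tail classes.
    model = train(model, tail, "transfer_tail")
    # Step 3: fine-tune on a sampled mix of head and tail instances.
    mixed = rng.sample(head, int(len(head) * sample_ratio)) + list(tail)
    model = train(model, mixed, "finetune_mixed")
    return model

state = simltd_schedule(head=list(range(100)), tail=list(range(10)))
print(state)
```

The point of the sketch is the ordering: each stage reuses the state produced by the previous one, so tail-class learning starts from head-class representations rather than from scratch.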

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| LVIS v1.0 val | SimLTD w/ MixPL (Swin-L + COCO unlabeled images) | box AP | 51.5 | — | Unverified |
| LVIS v1.0 val | SimLTD Fully Supervised (Swin-L) | box AP | 49.8 | — | Unverified |

Reproductions