Omni-DETR: Omni-Supervised Object Detection with Transformers

2022-03-30 · CVPR 2022

Pei Wang, Zhaowei Cai, Hao Yang, Gurumurthy Swaminathan, Nuno Vasconcelos, Bernt Schiele, Stefano Soatto

Abstract

We consider the problem of omni-supervised object detection, which can use unlabeled, fully labeled, and weakly labeled data, with weak annotations such as image tags, counts, and points. This is enabled by a unified architecture, Omni-DETR, which builds on recent progress in student-teacher frameworks and end-to-end transformer-based object detection. Under this unified architecture, different types of weak labels can be leveraged to generate accurate pseudo labels, via a bipartite-matching-based filtering mechanism, for the model to learn from. In experiments, Omni-DETR achieves state-of-the-art results on multiple datasets and settings. We also find that weak annotations can help to improve detection performance, and that a mixture of them can achieve a better trade-off between annotation cost and accuracy than standard complete annotation. These findings could encourage larger object detection datasets with mixed annotations. The code is available at https://github.com/amazon-research/omni-detr.
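The abstract describes filtering teacher-generated pseudo labels by bipartite matching against weak annotations. The sketch below illustrates the general idea (it is not the authors' implementation): teacher pseudo-boxes are matched one-to-one to weak point labels with the Hungarian algorithm, and only matched, confident boxes survive as pseudo labels. The cost function, threshold, and function name are all illustrative assumptions.

```python
# Illustrative sketch (NOT the Omni-DETR code) of bipartite-matching-based
# pseudo-label filtering: match teacher boxes to weak point annotations and
# keep only the matched boxes whose confidence clears a threshold.
import numpy as np
from scipy.optimize import linear_sum_assignment

def filter_pseudo_boxes(boxes, scores, points, score_thresh=0.5):
    """boxes: (N, 4) teacher predictions in xyxy format; scores: (N,);
    points: (M, 2) weak point labels. Returns matched boxes above threshold."""
    if len(boxes) == 0 or len(points) == 0:
        return np.empty((0, 4))
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0  # (N, 2) box centers
    # Cost of assigning each point (row) to each box (column): distance to the
    # box center, plus a small penalty for low-confidence boxes (assumed cost).
    dist = np.linalg.norm(centers[None, :, :] - points[:, None, :], axis=-1)
    cost = dist + (1.0 - scores)[None, :]
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    keep = [c for r, c in zip(rows, cols) if scores[c] >= score_thresh]
    return boxes[keep]
```

In this toy form, a point annotation acts as a filter: any pseudo-box the matching cannot pair with a weak label is discarded rather than used for training.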

Benchmark Results

Dataset                   Model      Metric  Claimed  Verified  Status
COCO 1% labeled data      Omni-DETR  mAP     18.6     —         Unverified
COCO 2% labeled data      Omni-DETR  mAP     23.2     —         Unverified
COCO 5% labeled data      Omni-DETR  mAP     30.2     —         Unverified
COCO 10% labeled data     Omni-DETR  mAP     34.1     —         Unverified