DETReg: Unsupervised Pretraining with Region Priors for Object Detection

2021-06-08 · CVPR 2022 · Code Available

Amir Bar, Xin Wang, Vadim Kantorov, Colorado J Reed, Roei Herzig, Gal Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson


Abstract

Recent self-supervised pretraining methods for object detection largely focus on pretraining the backbone of the object detector, neglecting key parts of the detection architecture. Instead, we introduce DETReg, a new self-supervised method that pretrains the entire object detection network, including the object localization and embedding components. During pretraining, DETReg predicts object localizations to match the localizations from an unsupervised region proposal generator, and simultaneously aligns the corresponding feature embeddings with embeddings from a self-supervised image encoder. We implement DETReg using the DETR family of detectors and show that it improves over competitive baselines when finetuned on the COCO, PASCAL VOC, and Airbus Ship benchmarks. DETReg also achieves improved performance in low-data regimes, e.g., when training with only 1% of the labels and in few-shot learning settings.
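The pretraining objective described above combines two terms: a box regression loss against unsupervised region proposals and an alignment loss between the detector's object embeddings and those of a frozen self-supervised image encoder. The sketch below is a simplified illustration of that combination, not the paper's implementation: it assumes predictions have already been matched one-to-one to proposals, and it omits the Hungarian matching, GIoU term, and objectness prediction that the full DETReg method uses. All function and parameter names here are hypothetical.

```python
import numpy as np

def l1_box_loss(pred_boxes, proposal_boxes):
    # L1 distance between predicted boxes and unsupervised region
    # proposals, both in normalized (cx, cy, w, h) format.
    return np.abs(pred_boxes - proposal_boxes).mean()

def embedding_alignment_loss(pred_embs, encoder_embs):
    # Mean squared error aligning detector object embeddings with
    # embeddings from a frozen self-supervised image encoder.
    return ((pred_embs - encoder_embs) ** 2).mean()

def detreg_pretraining_loss(pred_boxes, pred_embs,
                            proposal_boxes, encoder_embs,
                            lambda_box=1.0, lambda_emb=1.0):
    # Simplified combined objective: localization term + embedding
    # alignment term, with hypothetical weighting coefficients.
    # Assumes a 1:1 matching between predictions and proposals.
    return (lambda_box * l1_box_loss(pred_boxes, proposal_boxes)
            + lambda_emb * embedding_alignment_loss(pred_embs, encoder_embs))

# Toy usage: two matched predictions, 4-dim boxes, 8-dim embeddings.
boxes = np.array([[0.5, 0.5, 0.2, 0.3], [0.1, 0.2, 0.1, 0.1]])
embs = np.ones((2, 8))
loss = detreg_pretraining_loss(boxes, embs, boxes, embs)  # perfect match -> 0.0
```

In the actual method, the proposals come from an unsupervised generator such as Selective Search, and the matching between the detector's queries and the proposals is resolved with bipartite matching, as in DETR-style set prediction.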

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| COCO 2017 | DETReg (ours) | AP | 30 | — | Unverified |
| MS-COCO (10-shot) | DETReg-ft-full DDETR | AP | 25 | — | Unverified |
| MS-COCO (30-shot) | DETReg-ft-full DDETR | AP | 30 | — | Unverified |
