DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR
Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang
Code
- github.com/slongliu/dab-detr (official, in paper) — PyTorch, ★ 575
- github.com/IDEA-Research/detrex — PyTorch, ★ 2,274
- github.com/alibaba/EasyCV — PyTorch, ★ 1,949
- github.com/idea-research/dn-detr — PyTorch, ★ 604
- github.com/idea-research/dab-detr — PyTorch, ★ 575
- github.com/horrible-dong/teamdetr — PyTorch, ★ 21
- github.com/helq2612/biadt — PyTorch, ★ 9
- github.com/Tajamul21/Detection-Classification-and-Semantic_Segmentation-of-apples — PyTorch, ★ 2
Abstract
We present in this paper a novel query formulation using dynamic anchor boxes for DETR (DEtection TRansformer) and offer a deeper understanding of the role of queries in DETR. This new formulation directly uses box coordinates as queries in Transformer decoders and dynamically updates them layer by layer. Using box coordinates not only helps leverage explicit positional priors to improve the query-to-feature similarity and eliminate the slow training convergence issue in DETR, but also allows us to modulate the positional attention map using the box width and height information. This design makes it clear that queries in DETR can be implemented as performing soft ROI pooling layer by layer in a cascade manner. As a result, it achieves the best performance on the MS-COCO benchmark among DETR-like detection models under the same setting, e.g., 45.7% AP using a ResNet50-DC5 backbone trained for 50 epochs. We also conducted extensive experiments to confirm our analysis and verify the effectiveness of our methods. Code is available at https://github.com/SlongLiu/DAB-DETR.
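The two mechanisms the abstract describes — encoding a 4D anchor box (x, y, w, h) into a positional query, and refining the anchor layer by layer in the decoder — can be sketched as follows. This is a minimal stdlib-only illustration, not the official implementation: the function names (`sine_embed`, `anchor_box_to_query`, `refine_box`), the embedding dimension, and the use of offsets in inverse-sigmoid space are assumptions modeled on common DETR-style detectors.

```python
import math

def sine_embed(coord, dim=128, temperature=10000.0):
    """Sinusoidal embedding of one normalized scalar coordinate (illustrative)."""
    out = []
    for i in range(dim // 2):
        freq = temperature ** (2 * i / dim)
        out.append(math.sin(coord * 2 * math.pi / freq))
        out.append(math.cos(coord * 2 * math.pi / freq))
    return out

def anchor_box_to_query(box, dim=128):
    """Turn an anchor box (x, y, w, h) in [0, 1] into a positional query.

    Concatenating the embeddings of all four coordinates gives the decoder's
    cross-attention an explicit positional prior, and exposes w and h so the
    attention map can be modulated by box size (hypothetical layout).
    """
    x, y, w, h = box
    return (sine_embed(x, dim) + sine_embed(y, dim)
            + sine_embed(w, dim) + sine_embed(h, dim))

def inverse_sigmoid(p, eps=1e-5):
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def refine_box(box, deltas):
    """Layer-by-layer anchor update (assumed scheme).

    Each decoder layer predicts coordinate offsets; adding them in
    inverse-sigmoid space and squashing back keeps the box in [0, 1],
    so the next layer attends with an updated positional prior.
    """
    return tuple(1.0 / (1.0 + math.exp(-(inverse_sigmoid(b) + d)))
                 for b, d in zip(box, deltas))

# Example: build a query from an anchor, then refine the anchor once.
query = anchor_box_to_query((0.5, 0.5, 0.2, 0.3))
box1 = refine_box((0.5, 0.5, 0.2, 0.3), (0.1, -0.1, 0.0, 0.0))
```

Because each layer re-derives the query from the current box, the decoder behaves like cascaded soft ROI pooling: every stage attends around a progressively corrected region.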
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| COCO minival | DAB-DETR-DC5-R101 | box AP | 46.6 | — | Unverified |
| COCO minival | DAB-DETR-R101 | box AP | 44.1 | — | Unverified |