SOTAVerified

Mixed Pseudo Labels for Semi-Supervised Object Detection

2023-12-12 · Code Available

Zeming Chen, Wenwei Zhang, Xinjiang Wang, Kai Chen, Zhi Wang



Abstract

While the pseudo-label method has demonstrated considerable success in semi-supervised object detection, this paper uncovers notable limitations of the approach. Specifically, the pseudo-label method tends to amplify the detector's inherent strengths while accentuating its weaknesses, which manifests as missed detections in the pseudo-labels, particularly for small and tail-category objects. To overcome these challenges, this paper proposes Mixed Pseudo Labels (MixPL), which applies Mixup and Mosaic to pseudo-labeled data to mitigate the negative impact of missed detections and balance the model's learning across object scales. Additionally, detection performance on tail categories is improved by resampling labeled data containing relevant instances. Notably, MixPL consistently improves the performance of various detectors and obtains new state-of-the-art results with Faster R-CNN, FCOS, and DINO on the COCO-Standard and COCO-Full benchmarks. Furthermore, MixPL also scales well to large models, improving DINO Swin-L by 2.5% mAP and achieving a nontrivial new record (60.2% mAP) on the COCO val2017 benchmark without extra annotations.
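The core idea of mixing pseudo-labeled data can be illustrated with a generic detection-style Mixup: two images are blended pixel-wise, and both sets of pseudo boxes are kept, so an object missed by the detector in one sample can still be supervised through the other. This is a minimal sketch of the general technique, not the authors' implementation; the function name, fixed blend ratio, and box format are illustrative assumptions.

```python
import numpy as np

def mixup_pseudo(img_a, boxes_a, img_b, boxes_b, lam=0.5):
    """Generic Mixup for two pseudo-labeled detection samples (illustrative).

    img_*   : float arrays of identical shape (H, W, C)
    boxes_* : arrays of shape (N, 4) in [x1, y1, x2, y2] format
    lam     : blend weight for the first image (assumed fixed here)
    """
    # Blend the two images pixel-wise.
    mixed = lam * img_a + (1.0 - lam) * img_b
    # Unlike classification Mixup, detection labels are not interpolated:
    # both pseudo-box sets are concatenated, so detections missed in one
    # set of pseudo labels can survive via the other.
    boxes = np.concatenate([boxes_a, boxes_b], axis=0)
    return mixed, boxes
```

Mosaic follows the same principle by tiling four pseudo-labeled images into one canvas and shifting their boxes accordingly, which additionally varies the effective object scale seen during training.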

Tasks

Semi-Supervised Object Detection

Benchmark Results

| Dataset                 | Model | Metric | Claimed | Verified | Status     |
|-------------------------|-------|--------|---------|----------|------------|
| COCO 1% labeled data    | MixPL | mAP    | 31.7    | —        | Unverified |
| COCO 2% labeled data    | MixPL | mAP    | 34.7    | —        | Unverified |
| COCO 5% labeled data    | MixPL | mAP    | 40.1    | —        | Unverified |
| COCO 10% labeled data   | MixPL | mAP    | 44.6    | —        | Unverified |
| COCO 100% labeled data  | MixPL | mAP    | 55.2    | —        | Unverified |

Reproductions