
Self-supervised representation learning via adaptive hard-positive mining

2021-01-01

Shaofeng Zhang, Junchi Yan, Xiaokang Yang


Abstract

Despite their success in perception tasks over the last decade, deep neural networks are notoriously hungry for labeled training data, which limits their applicability to real-world problems. Self-supervised learning has therefore attracted intensive attention, and contrastive learning has become one of the dominant approaches for effective feature extraction, achieving state-of-the-art performance. In this paper, we first show theoretically that these methods cannot fully exploit the training samples, in the sense of hard-positive sample mining. We then propose a new contrastive method, AdpCLR (adaptive self-supervised contrastive learning representations), which, as our proof supports, explores the samples more effectively, in a manner closer to supervised contrastive learning. We thoroughly evaluate the quality of the learned representations on ImageNet for both the pretraining-based version (AdpCLR^pre) and the fully trained version (AdpCLR^full). The accuracy results show that AdpCLR^pre outperforms state-of-the-art contrastive models by 3.0% with an extra 100 epochs, while AdpCLR^full outperforms them by 2.5% with an additional 600 epochs.
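The abstract does not spell out AdpCLR's adaptive selection criterion, but the general idea of hard-positive mining in contrastive learning can be sketched: beyond the augmented view of each sample, the most similar *other* samples in embedding space are treated as additional positive candidates. The following is a minimal illustrative sketch of that generic idea, not the paper's method; the function name, the fixed top-k rule, and the cosine-similarity choice are all assumptions for illustration.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere, as is standard in contrastive setups."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def mine_hard_positives(embeddings, k=1):
    """For each sample, return the indices of the k most cosine-similar
    other samples as candidate hard positives.

    NOTE: a hypothetical, simplified selection rule; the paper's adaptive
    criterion is not described in the abstract.
    """
    z = l2_normalize(np.asarray(embeddings, dtype=np.float64))
    sim = z @ z.T                    # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)   # exclude self-pairs
    # Sort each row by descending similarity and keep the top k.
    return np.argsort(-sim, axis=1)[:, :k]
```

In a full training loop, the mined indices would be added to the positive set of an InfoNCE-style loss, moving the objective closer to supervised contrastive learning, where all same-class samples count as positives.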
