SOTAVerified

Exploring Target Representations for Masked Autoencoders

2022-09-08 · Code Available

Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, Rongrong Ji


Abstract

Masked autoencoders have become a popular training paradigm for self-supervised visual representation learning. These models randomly mask a portion of the input and reconstruct the masked portion according to target representations. In this paper, we first show that a careful choice of the target representation is unnecessary for learning good representations, since different targets tend to derive similarly behaved models. Driven by this observation, we propose a multi-stage masked distillation pipeline and use a randomly initialized model as the teacher, enabling us to effectively train high-capacity models without any effort to carefully design target representations. Interestingly, we further explore using teachers of larger capacity, obtaining distilled students with remarkable transferring ability. On different tasks of classification, transfer learning, object detection, and semantic segmentation, the proposed method to perform masked knowledge distillation with bootstrapped teachers (dBOT) outperforms previous self-supervised methods by nontrivial margins. We hope our findings, as well as the proposed method, could motivate people to rethink the roles of target representations in pre-training masked autoencoders. The code and pre-trained models are publicly available at https://github.com/liuxingbin/dbot.
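The multi-stage pipeline described in the abstract can be sketched in a few lines. This is a hedged toy illustration, not the authors' implementation: models are stand-in linear maps instead of ViTs, and the masking, loss, and optimizer are simplified. The key structure it shows is (a) the student reconstructs the teacher's representation of the full input from a masked input, and (b) after each stage the distilled student is bootstrapped into the next teacher, starting from a randomly initialized one.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # toy feature dimension (stands in for patch-embedding width)

def masked_distill(teacher_w, n_steps=200, lr=0.1, mask_ratio=0.75, batch=32):
    """One distillation stage: train a fresh, randomly initialized student
    (here just a linear map) to predict the teacher's representation of the
    full input while the student only sees a masked input."""
    student_w = rng.normal(scale=0.1, size=(DIM, DIM))
    for _ in range(n_steps):
        x = rng.normal(size=(batch, DIM))          # batch of toy "images"
        mask = rng.random(x.shape) > mask_ratio    # keep ~25% of entries
        target = x @ teacher_w                     # teacher sees the full input
        pred = (x * mask) @ student_w              # student sees the masked input
        # gradient of mean squared error w.r.t. the student's weights
        grad = (x * mask).T @ (pred - target) / batch
        student_w -= lr * grad
    return student_w

# Multi-stage bootstrapping: start from a *randomly initialized* teacher;
# after each stage, the distilled student becomes the next teacher.
teacher = rng.normal(scale=0.1, size=(DIM, DIM))
for stage in range(3):
    teacher = masked_distill(teacher)
```

In the paper the teacher for later stages may also be a larger pre-trained model (e.g. CLIP, as in the table below); the loop above only illustrates the bootstrapped-teacher variant.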

Benchmark Results

| Dataset  | Model                         | Metric         | Claimed | Verified | Status     |
|----------|-------------------------------|----------------|---------|----------|------------|
| ImageNet | dBOT ViT-B (CLIP as Teacher)  | Top 1 Accuracy | 85.7    |          | Unverified |
| ImageNet | dBOT ViT-H (CLIP as Teacher)  | Top 1 Accuracy | 88.2    |          | Unverified |
| ImageNet | dBOT ViT-L (CLIP as Teacher)  | Top 1 Accuracy | 87.8    |          | Unverified |
