
AutoFormer: Searching Transformers for Visual Recognition

2021-07-01 · ICCV 2021 · Code Available

Minghao Chen, Houwen Peng, Jianlong Fu, Haibin Ling


Abstract

Recently, pure transformer-based models have shown great potential for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based on manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from this strategy, the trained supernet allows thousands of subnets to be very well trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to that of subnets retrained from scratch. Besides, the searched models, which we refer to as AutoFormers, surpass recent state-of-the-art models such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by reporting its performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/AutoML.
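
The weight-entanglement idea described above can be illustrated with a minimal PyTorch sketch (this is not the official AutoFormer implementation): candidate blocks of different sizes in the same layer share slices of one superset weight tensor, so training whichever subnet is sampled at a given step also updates the weights that every other subnet will later inherit. The `EntangledLinear` module, the candidate embedding dimensions, and the toy training loop below are illustrative assumptions, not code from the repository.

```python
# Hypothetical sketch of weight entanglement (not the official AutoFormer code).
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntangledLinear(nn.Module):
    """A linear layer whose smaller candidates are slices of the largest one."""
    def __init__(self, max_in: int, max_out: int):
        super().__init__()
        # Single superset parameter shared by all candidate widths.
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x: torch.Tensor, out_dim: int) -> torch.Tensor:
        in_dim = x.shape[-1]
        # The sampled candidate inherits the top-left slice of the shared weight.
        w = self.weight[:out_dim, :in_dim]
        b = self.bias[:out_dim]
        return F.linear(x, w, b)

# Toy one-shot supernet loop: sample one candidate width per step and update it;
# gradients flow into the shared superset weights that all subnets inherit.
embed_choices = [192, 256, 320]           # assumed candidate embedding dims
layer = EntangledLinear(max_in=320, max_out=320)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

for step in range(3):
    dim = random.choice(embed_choices)    # uniformly sample a subnet
    x = torch.randn(8, dim)
    y = layer(x, out_dim=dim)
    loss = y.pow(2).mean()                # placeholder loss for illustration
    opt.zero_grad()
    loss.backward()
    opt.step()
```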

Tasks

Benchmark Results

Dataset                   Model                Metric           Claimed   Verified   Status
Oxford 102 Flowers        AutoFormer-S | 384   Top-1 Accuracy   98.8      -          Unverified
Oxford-IIIT Pet Dataset   AutoFormer-S | 384   Accuracy         94.9      -          Unverified
Stanford Cars             AutoFormer-S | 384   Accuracy         93.4      -          Unverified

Reproductions