SOTAVerified

MetaFormer Baselines for Vision

2022-10-24 · Code Available

Weihao Yu, Chenyang Si, Pan Zhou, Mi Luo, Yichen Zhou, Jiashi Feng, Shuicheng Yan, Xinchao Wang


Abstract

MetaFormer, the abstracted architecture of Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, again without focusing on token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and summarize our observations as follows. (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. Even when the token mixer is specified as a random matrix that mixes tokens, the resulting model, RandFormer, yields an accuracy of >81%, outperforming IdentityFormer. This gives confidence in MetaFormer's results when new token mixers are adopted. (3) MetaFormer effortlessly offers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking common depthwise separable convolutions as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model, CAFormer, sets a new record on ImageNet-1K: it achieves an accuracy of 85.5% at 224x224 resolution under normal supervised training, without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with GELU yet achieves better performance. We expect StarReLU to find great potential in MetaFormer-like models and other neural networks.
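The abstract can be made concrete with a small sketch: a MetaFormer block is a residual token-mixing sub-block followed by a residual channel-MLP sub-block, with the token mixer left pluggable (identity for IdentityFormer, a frozen random mixing matrix for RandFormer). The StarReLU formula `s * ReLU(x)**2 + b` follows the paper's definition, with initial `s`/`b` chosen to keep the output roughly zero-mean and unit-variance for standard-normal input; everything else (function names, shapes, weight scales) is illustrative, not the authors' code.

```python
import numpy as np

def star_relu(x, s=0.8944, b=-0.4472):
    """StarReLU from the paper: s * ReLU(x)**2 + b, with learnable
    scalars s and b shared across channels. The init values normalize
    the output for standard-normal input; per the paper, StarReLU needs
    ~71% fewer activation FLOPs than GELU."""
    return s * np.square(np.maximum(x, 0.0)) + b

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def metaformer_block(x, token_mixer, w1, w2):
    """One MetaFormer block on x of shape (num_tokens, channels):
    norm -> token mixer -> residual, then norm -> MLP -> residual."""
    x = x + token_mixer(layer_norm(x))       # token-mixing sub-block
    h = star_relu(layer_norm(x) @ w1)        # channel-MLP sub-block
    return x + h @ w2

# Two of the paper's most basic mixers (illustrative shapes):
identity_mixer = lambda t: t                 # IdentityFormer
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)) / 4.0      # frozen random matrix
random_mixer = lambda t: W @ t               # RandFormer-style mixing

x = rng.standard_normal((16, 64))            # 16 tokens, 64 channels
y = metaformer_block(x, random_mixer,
                     rng.standard_normal((64, 128)) * 0.02,
                     rng.standard_normal((128, 64)) * 0.02)
```

Swapping `random_mixer` for `identity_mixer` (or depthwise convolution, or self-attention) changes only the token-mixing line, which is the point of the MetaFormer abstraction.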


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| ImageNet-A | ConvFormer-B36 (384) | Top-1 accuracy % | 55.3 | — | Unverified |
| ImageNet-A | CAFormer-B36 | Top-1 accuracy % | 48.5 | — | Unverified |
| ImageNet-A | ConvFormer-B36 | Top-1 accuracy % | 40.1 | — | Unverified |
| ImageNet-A | CAFormer-B36 (IN-21K, 384) | Top-1 accuracy % | 79.5 | — | Unverified |
| ImageNet-A | ConvFormer-B36 (IN-21K, 384) | Top-1 accuracy % | 73.5 | — | Unverified |
| ImageNet-A | CAFormer-B36 (IN-21K) | Top-1 accuracy % | 69.4 | — | Unverified |
| ImageNet-A | ConvFormer-B36 (IN-21K) | Top-1 accuracy % | 63.3 | — | Unverified |
| ImageNet-A | CAFormer-B36 (384) | Top-1 accuracy % | 61.9 | — | Unverified |
| ImageNet-C | CAFormer-B36 (IN-21K, 384) | mean Corruption Error (mCE) | 30.8 | — | Unverified |
| ImageNet-C | CAFormer-B36 (IN-21K) | mean Corruption Error (mCE) | 31.8 | — | Unverified |
| ImageNet-C | ConvFormer-B36 (IN-21K) | mean Corruption Error (mCE) | 35.0 | — | Unverified |
| ImageNet-C | CAFormer-B36 | mean Corruption Error (mCE) | 42.6 | — | Unverified |
| ImageNet-C | ConvFormer-B36 | mean Corruption Error (mCE) | 46.3 | — | Unverified |
| ImageNet-R | CAFormer-B36 (IN-21K, 384) | Top-1 Error Rate | 29.6 | — | Unverified |
| ImageNet-R | CAFormer-B36 (IN-21K) | Top-1 Error Rate | 31.7 | — | Unverified |
| ImageNet-R | ConvFormer-B36 (IN-21K, 384) | Top-1 Error Rate | 33.5 | — | Unverified |
| ImageNet-R | ConvFormer-B36 (IN-21K) | Top-1 Error Rate | 34.7 | — | Unverified |
| ImageNet-R | CAFormer-B36 (384) | Top-1 Error Rate | 45.0 | — | Unverified |
| ImageNet-R | CAFormer-B36 | Top-1 Error Rate | 46.1 | — | Unverified |
| ImageNet-R | ConvFormer-B36 (384) | Top-1 Error Rate | 47.8 | — | Unverified |
| ImageNet-R | ConvFormer-B36 | Top-1 Error Rate | 48.9 | — | Unverified |
| ImageNet-Sketch | CAFormer-B36 (IN-21K, 384) | Top-1 accuracy % | 54.5 | — | Unverified |
| ImageNet-Sketch | ConvFormer-B36 (IN-21K, 384) | Top-1 accuracy % | 52.9 | — | Unverified |
| ImageNet-Sketch | CAFormer-B36 (IN-21K) | Top-1 accuracy % | 52.8 | — | Unverified |
| ImageNet-Sketch | ConvFormer-B36 (IN-21K) | Top-1 accuracy % | 52.7 | — | Unverified |
| ImageNet-Sketch | CAFormer-B36 | Top-1 accuracy % | 42.5 | — | Unverified |
| ImageNet-Sketch | ConvFormer-B36 | Top-1 accuracy % | 39.5 | — | Unverified |

Reproductions