SOTAVerified

Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers

2023-08-18 · Code Available

Tobias Christian Nauen, Sebastian Palacio, Federico Raue, Andreas Dengel

Abstract

Self-attention in Transformers comes with a high computational cost because of its quadratic complexity, but the effectiveness of Transformers on problems in language and vision has sparked extensive research aimed at improving their efficiency. However, diverse experimental conditions, spanning multiple input domains, prevent a fair comparison based solely on reported results, posing challenges for model selection. To address this gap in comparability, we perform a large-scale benchmark of more than 45 models for image classification, evaluating key efficiency aspects, including accuracy, speed, and memory usage. Our benchmark provides a standardized baseline for efficiency-oriented transformers. We analyze the results based on the Pareto front -- the boundary of optimal models. Surprisingly, despite claims that other models are more efficient, ViT remains Pareto optimal across multiple metrics. We observe that hybrid attention-CNN models exhibit remarkable inference memory- and parameter-efficiency. Moreover, our benchmark shows that using a larger model is generally more efficient than using higher-resolution images. Thanks to our holistic evaluation, we provide a centralized resource for practitioners and researchers, facilitating informed decisions when selecting or developing efficient transformers.
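The Pareto-front analysis mentioned above can be made concrete with a small sketch: a model is Pareto optimal if no other model beats it on every metric at once (e.g. both accuracy and throughput). The model names and numbers below are illustrative placeholders, not measurements from the paper.

```python
# Sketch: finding Pareto-optimal models on an accuracy-vs-throughput plane.
# All candidate entries below are hypothetical, for illustration only.

def pareto_front(models):
    """Return the names of models not dominated by any other model.

    A model dominates another if it is at least as good on both axes
    (higher top-1 accuracy, higher throughput) and strictly better on one.
    """
    front = []
    for name, acc, thr in models:
        dominated = any(
            (a >= acc and t >= thr) and (a > acc or t > thr)
            for n, a, t in models
            if n != name
        )
        if not dominated:
            front.append(name)
    return front

# (accuracy in %, throughput in images/s) -- made-up values
candidates = [
    ("model-A", 83.0, 1200.0),  # accurate and fast
    ("model-B", 84.0, 800.0),   # more accurate, but slower
    ("model-C", 82.0, 900.0),   # dominated by model-A on both axes
]
print(pareto_front(candidates))  # → ['model-A', 'model-B']
```

Models on the front represent distinct, non-redundant trade-offs; anything off the front is strictly worse than some alternative, which is the sense in which the paper finds ViT still Pareto optimal.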

Benchmark Results

| Dataset  | Model                 | Metric         | Claimed | Verified | Status     |
|----------|-----------------------|----------------|---------|----------|------------|
| ImageNet | Wave-ViT-S            | Top 1 Accuracy | 83.61   | —        | Unverified |
| ImageNet | CaiT-S24              | Top 1 Accuracy | 84.91   | —        | Unverified |
| ImageNet | Wave-ViT-S            | Top 1 Accuracy | 83.9    | —        | Unverified |
| ImageNet | XCiT-S                | Top 1 Accuracy | 83.65   | —        | Unverified |
| ImageNet | SwinV2-Ti             | Top 1 Accuracy | 83.09   | —        | Unverified |
| ImageNet | ViT-S                 | Top 1 Accuracy | 82.54   | —        | Unverified |
| ImageNet | EViT (delete)         | Top 1 Accuracy | 82.29   | —        | Unverified |
| ImageNet | STViT-Swin-Ti         | Top 1 Accuracy | 82.22   | —        | Unverified |
| ImageNet | ToMe-ViT-S            | Top 1 Accuracy | 82.11   | —        | Unverified |
| ImageNet | EViT (fuse)           | Top 1 Accuracy | 81.96   | —        | Unverified |
| ImageNet | GFNet-S               | Top 1 Accuracy | 81.33   | —        | Unverified |
| ImageNet | DynamicViT-S          | Top 1 Accuracy | 81.09   | —        | Unverified |
| ImageNet | TokenLearner-ViT-8    | Top 1 Accuracy | 80.66   | —        | Unverified |
| ImageNet | CoaT-Ti               | Top 1 Accuracy | 78.42   | —        | Unverified |
| ImageNet | Poly-SA-ViT-S         | Top 1 Accuracy | 78.34   | —        | Unverified |
| ImageNet | EfficientFormer-V2-S0 | Top 1 Accuracy | 71.53   | —        | Unverified |