Separable Self-attention for Mobile Vision Transformers
Sachin Mehta, Mohammad Rastegari
Code
- github.com/apple/ml-cvnets (official, in paper; PyTorch) ★ 1,968
- github.com/rwightman/pytorch-image-models (PyTorch) ★ 36,538
- github.com/jaiwei98/mobile-vit-pytorch (PyTorch) ★ 58
- github.com/IMvision12/keras-vision-models (PyTorch) ★ 8
- github.com/t0nyliang/EEGMobile (PyTorch) ★ 3
- gitlab.com/birder/birder (PyTorch) ★ 0
- github.com/mindspore-courses/External-Attention-MindSpore/blob/main/model/attention/MobileViTv2Attention.py (MindSpore) ★ 0
- github.com/leondgarse/keras_cv_attention_models/tree/main/keras_cv_attention_models/mobilevit (TensorFlow) ★ 0
Abstract
Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have higher latency than convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k^2) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e., O(k). A simple yet effective characteristic of the proposed method is that it computes self-attention using element-wise operations, making it a good choice for resource-constrained devices. The improved model, MobileViTv2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTv2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device. Our source code is available at: https://github.com/apple/ml-cvnets
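The abstract's key idea (replacing the O(k^2) token-to-token attention matrix with per-token scalar context scores, so attention reduces to a softmax over k scalars, a weighted sum, and element-wise multiplications) can be illustrated with a minimal NumPy sketch. This is a simplified illustration under stated assumptions, not the paper's implementation: the weight names `w_i`, `w_k`, `w_v` are hypothetical, and the output projection and depthwise layers of the actual MobileViTv2 block are omitted.

```python
import numpy as np

def separable_self_attention(x, w_i, w_k, w_v):
    """Sketch of linear-complexity separable self-attention.

    x   : (k, d) token matrix
    w_i : (d, 1) projection producing one scalar score per token
    w_k : (d, d) key-branch projection
    w_v : (d, d) value-branch projection
    """
    # One scalar per token instead of a k x k attention matrix: O(k).
    scores = x @ w_i                              # (k, 1)
    scores = np.exp(scores - scores.max())
    scores = scores / scores.sum()                # softmax over the k tokens

    # Global context vector: score-weighted sum of key projections.
    context = (scores * (x @ w_k)).sum(axis=0)    # (d,)

    # Element-wise broadcast of the context onto the value branch --
    # no batch-wise matrix multiplication between token pairs.
    values = np.maximum(x @ w_v, 0.0)             # (k, d), ReLU
    return values * context                       # (k, d)

rng = np.random.default_rng(0)
k, d = 6, 8
out = separable_self_attention(
    rng.standard_normal((k, d)),
    rng.standard_normal((d, 1)),
    rng.standard_normal((d, d)),
    rng.standard_normal((d, d)),
)
print(out.shape)  # (6, 8)
```

Note that every step touches each token once (matrix-vector products, a softmax over k scalars, and broadcasts), which is where the claimed O(k) complexity and the avoidance of costly batch-wise matrix multiplication come from.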
Benchmark Results
| Dataset | Model | Metric | Claimed (%) | Verified | Status |
|---|---|---|---|---|---|
| ImageNet | MobileViTv2-1.0 | Top-1 accuracy | 78.1 | — | Unverified |
| ImageNet | MobileViTv2-0.75 | Top-1 accuracy | 75.6 | — | Unverified |
| ImageNet | MobileViTv2-0.5 | Top-1 accuracy | 70.2 | — | Unverified |