Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding
Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, Jianfeng Gao
Code
- github.com/microsoft/vision-longformer (official, in paper, PyTorch, ★ 249)
- github.com/microsoft/esvit (PyTorch, ★ 413)
- github.com/microsoft/VisionLongformerForObjectDetection (PyTorch, ★ 34)
Abstract
This paper presents a new Vision Transformer (ViT) architecture, Multi-Scale Vision Longformer, which significantly enhances the ViT of Dosovitskiy et al. (2020) for encoding high-resolution images using two techniques. The first is a multi-scale model structure, which provides image encodings at multiple scales with manageable computational cost. The second is the attention mechanism of Vision Longformer, a variant of Longformer (Beltagy et al., 2020) originally developed for natural language processing, which achieves linear complexity w.r.t. the number of input tokens. A comprehensive empirical study shows that the new ViT significantly outperforms several strong baselines, including existing ViT models, their ResNet counterparts, and the Pyramid Vision Transformer from a concurrent work (Wang et al., 2021), on a range of vision tasks including image classification, object detection, and segmentation. The models and source code are released at https://github.com/microsoft/vision-longformer.
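The linear-complexity claim follows from restricting each token's attention to a local window: with n tokens and a one-sided window of size w, the cost is O(n·w) instead of the O(n²) of full self-attention. Below is a minimal NumPy sketch of this sliding-window attention idea; it is illustrative only and not the paper's actual implementation (which also includes global tokens and an efficient banded computation).

```python
import numpy as np

def sliding_window_attention(q, k, v, w):
    """Local attention sketch: token i attends only to tokens within
    distance w, so the cost is O(n * w) rather than O(n^2).

    q, k, v: arrays of shape (n, d); w: one-sided window size.
    Illustrative only -- not the paper's banded implementation.
    """
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)     # window bounds
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)       # scaled dot products
        weights = np.exp(scores - scores.max())       # stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]                   # weighted sum of values
    return out
```

When w ≥ n − 1 the window covers the whole sequence and the result coincides with dense softmax attention, which makes the approximation easy to sanity-check.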
Benchmark Results
| Dataset | Model | Top-1 Accuracy (%) |
|---|---|---|
| ImageNet | ViL-Tiny-RPB | 76.7 |
| ImageNet | ViL-Small | 82 |
| ImageNet | ViL-Medium-W | 82.9 |
| ImageNet | ViL-Medium-D | 83.3 |
| ImageNet | ViL-Base-W | 81.9 |
| ImageNet | ViL-Base-D | 83.2 |

All results are as claimed in the paper; none have been independently verified.