
Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer

2021-06-07 · Code Available

Zilong Huang, Youcheng Ben, Guozhong Luo, Pei Cheng, Gang Yu, Bin Fu


Abstract

Very recently, window-based Transformers, which compute self-attention within non-overlapping local windows, have demonstrated promising results on image classification, semantic segmentation, and object detection. However, less attention has been paid to the cross-window connection, which is key to improving representation ability. In this work, we revisit spatial shuffle as an efficient way to build connections among windows. As a result, we propose a new vision transformer, named Shuffle Transformer, which is highly efficient and can be implemented by modifying two lines of code. Furthermore, depth-wise convolution is introduced to complement the spatial shuffle by enhancing neighbor-window connections. The proposed architectures achieve excellent performance on a wide range of visual tasks including image-level classification, object detection, and semantic segmentation. Code will be released for reproduction.
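The "two lines of code" mentioned in the abstract refer to a reshape-and-transpose over the window grid, analogous to channel shuffle in ShuffleNet but applied spatially. A minimal NumPy sketch of the idea follows; the function name and tensor layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def spatial_shuffle(x, window_size):
    """Hypothetical sketch of spatial shuffle on an (H, W, C) feature map.

    Tokens spaced H//w (and W//w) apart, which would fall into different
    non-overlapping windows, are regrouped so they land in the same window,
    building cross-window connections.
    """
    H, W, C = x.shape
    w = window_size
    # The "two lines": swap the window index and the within-window index
    # along each spatial axis, then flatten back to (H, W, C).
    x = x.reshape(w, H // w, w, W // w, C).transpose(1, 0, 3, 2, 4)
    return x.reshape(H, W, C)
```

After the shuffle, rows (and columns) that were H//w apart become adjacent, so a subsequent window-based attention layer mixes tokens drawn from different original windows.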


Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ADE20K val | UperNet Shuffle-B | mIoU | 50.5 | — | Unverified |
| ADE20K val | UperNet Shuffle-S | mIoU | 49.6 | — | Unverified |
| ADE20K val | UperNet Shuffle-T | mIoU | 47.6 | — | Unverified |

Reproductions