
Local-to-Global Self-Attention in Vision Transformers

2021-07-10

Jinpeng Li, Yichao Yan, Shengcai Liao, Xiaokang Yang, Ling Shao


Abstract

Transformers have demonstrated great potential in computer vision tasks. To avoid the dense computation of self-attention on high-resolution visual data, some recent Transformer models adopt a hierarchical design in which self-attention is computed only within local windows. This design significantly improves efficiency but lacks global feature reasoning in the early stages. In this work, we design a multi-path Transformer structure that enables local-to-global reasoning at multiple granularities in each stage. The proposed framework is computationally efficient and highly effective. With a marginal increase in computational overhead, our model achieves notable improvements in both image classification and semantic segmentation. Code is available at https://github.com/ljpadam/LG-Transformer
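To make the idea concrete, here is a minimal NumPy sketch of a two-path attention layer in the spirit of the abstract: a local path that attends within non-overlapping windows, and a global path that attends over a window-pooled (coarse) version of the feature map before broadcasting back to full resolution. This is a hypothetical simplification for illustration only (no learned projections, multi-head splitting, or multiple granularities); the function `local_global_attention` and its averaging of the two paths are assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over the token axis.
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def local_global_attention(x, window=2):
    """Toy two-path attention on an (H, W, C) feature map (illustrative only)."""
    H, W, C = x.shape
    assert H % window == 0 and W % window == 0

    # Local path: partition into non-overlapping windows, attend within each.
    wins = x.reshape(H // window, window, W // window, window, C)
    wins = wins.transpose(0, 2, 1, 3, 4).reshape(-1, window * window, C)
    local = attention(wins, wins, wins)
    local = local.reshape(H // window, W // window, window, window, C)
    local = local.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

    # Global path: average-pool each window into one coarse token,
    # attend over all coarse tokens, then broadcast back to full size.
    coarse = x.reshape(H // window, window, W // window, window, C).mean(axis=(1, 3))
    g = attention(coarse.reshape(1, -1, C),
                  coarse.reshape(1, -1, C),
                  coarse.reshape(1, -1, C))
    g = g.reshape(H // window, W // window, C)
    glob = np.repeat(np.repeat(g, window, axis=0), window, axis=1)

    # Fuse the two paths (here a simple average; the real model fuses learned paths).
    return 0.5 * (local + glob)

x = np.random.rand(4, 4, 8)
y = local_global_attention(x, window=2)
print(y.shape)  # (4, 4, 8)
```

The local path keeps the cost linear in the number of windows, while the coarse global path restores long-range interaction early on, which is the trade-off the abstract describes.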
