Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding

2021-05-26 · Code Available

Zizhao Zhang, Han Zhang, Long Zhao, Ting Chen, Sercan O. Arik, Tomas Pfister

Abstract

Hierarchical structures are popular in recent vision transformers; however, they require sophisticated designs and massive datasets to work well. In this paper, we explore the idea of nesting basic local transformers on non-overlapping image blocks and aggregating them in a hierarchical way. We find that the block aggregation function plays a critical role in enabling cross-block non-local information communication. This observation leads us to design a simplified architecture that requires minor code changes upon the original vision transformer. The benefits of the proposed judiciously-selected design are threefold: (1) NesT converges faster and requires much less training data to achieve good generalization on both ImageNet and small datasets like CIFAR; (2) when extending our key ideas to image generation, NesT leads to a strong decoder that is 8× faster than previous transformer-based generators; and (3) we show that decoupling the feature learning and abstraction processes via this nested hierarchy in our design enables constructing a novel method (named GradCAT) for visually interpreting the learned model. Source code is available at https://github.com/google-research/nested-transformer.
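The core mechanism described in the abstract — local processing within non-overlapping blocks, followed by a block aggregation step that merges neighboring blocks at the next hierarchy level — can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the function names are assumptions, the local transformer is replaced by an identity placeholder, and 2×2 max-pooling is used as a stand-in aggregation function (the paper studies which aggregation function works best).

```python
import numpy as np

def blockify(x, b):
    """Split an (H, W, C) feature map into non-overlapping b x b blocks.
    Returns (num_blocks, b*b, C): one token sequence per block, ready for
    local self-attention within each block."""
    H, W, C = x.shape
    x = x.reshape(H // b, b, W // b, b, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, b * b, C)

def unblockify(x, grid, b):
    """Inverse of blockify: (num_blocks, b*b, C) -> (H, W, C)."""
    gh, gw = grid
    x = x.reshape(gh, gw, b, b, -1).transpose(0, 2, 1, 3, 4)
    return x.reshape(gh * b, gw * b, x.shape[-1])

def aggregate(x):
    """Block aggregation (assumed here: 2x2 max-pool on the spatial map).
    Halving the resolution merges 4 neighboring blocks into 1 at the next
    level -- this is where cross-block communication happens."""
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

# Toy forward pass through two hierarchy levels (identity in place of the
# per-block local transformer).
x = np.random.rand(16, 16, 8)              # (H, W, C) feature map
for level in range(2):
    b = 4                                  # local block size
    grid = (x.shape[0] // b, x.shape[1] // b)
    tokens = blockify(x, b)                # local attention would run here
    x = unblockify(tokens, grid, b)
    x = aggregate(x)                       # merge neighboring blocks
print(x.shape)                             # -> (4, 4, 8)
```

Because each level only changes how blocks are partitioned and aggregated, this structure needs only minor code changes on top of a plain vision transformer, as the abstract notes.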

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10 | Transformer local-attention (NesT-B) | Percentage correct | 97.2 | | Unverified |
| CIFAR-100 | Transformer local-attention (NesT-B) | Percentage correct | 82.56 | | Unverified |
| ImageNet | Transformer local-attention (NesT-B) | Top 1 Accuracy | 83.8 | | Unverified |
| ImageNet | Transformer local-attention (NesT-S) | Top 1 Accuracy | 83.3 | | Unverified |
| ImageNet | Transformer local-attention (NesT-T) | Top 1 Accuracy | 81.5 | | Unverified |

Reproductions