
Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging). A minimal code sketch of the token pruning & merging idea follows below.
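As a concrete illustration of the token pruning & merging idea, here is a minimal PyTorch sketch of attention-score-based pruning that fuses the discarded tokens into a single token, in the spirit of EViT-style token reorganization (see "Not All Patches are What You Need" in the list below). The function name `prune_and_fuse_tokens`, the shapes, and the fusion rule are illustrative assumptions, not the exact method of any listed paper.

```python
import torch

def prune_and_fuse_tokens(x, cls_attn, keep_ratio=0.5):
    """Illustrative sketch (not any listed paper's exact method).

    x:        (B, N, D) patch tokens (CLS token excluded).
    cls_attn: (B, N)    attention weights from the CLS token to each
                        patch token, averaged over heads.

    Keeps the top-k tokens by CLS attention and fuses the rest into one
    attention-weighted token, so pruned content is summarized, not dropped.
    """
    B, N, D = x.shape
    k = max(1, int(N * keep_ratio))

    # Rank patch tokens by how strongly the CLS token attends to them.
    scores, idx = cls_attn.sort(dim=1, descending=True)
    keep_idx, drop_idx = idx[:, :k], idx[:, k:]

    kept = x.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    dropped = x.gather(1, drop_idx.unsqueeze(-1).expand(-1, -1, D))

    # Fuse pruned tokens into a single token, weighted by attention score.
    w = scores[:, k:].unsqueeze(-1)                      # (B, N-k, 1)
    fused = (dropped * w).sum(1, keepdim=True) / (w.sum(1, keepdim=True) + 1e-6)

    return torch.cat([kept, fused], dim=1)              # (B, k + 1, D)

# Example: 196 patch tokens reduced to 98 kept + 1 fused token.
x = torch.randn(2, 196, 384)
cls_attn = torch.softmax(torch.randn(2, 196), dim=-1)
print(prune_and_fuse_tokens(x, cls_attn, keep_ratio=0.5).shape)  # (2, 99, 384)
```

Because pruning happens between transformer blocks and leaves each block's weights untouched, this kind of token reduction speeds up inference without modifying the architecture itself, which is the common thread of the papers listed below.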

Papers

Showing 11–20 of 32 papers (page 2 of 4)

| Title | Status | Hype |
|---|---|---|
| AdaViT: Adaptive Tokens for Efficient Vision Transformer | Code | 1 |
| Adaptive Token Sampling For Efficient Vision Transformers | Code | 1 |
| Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | Code | 1 |
| Learned Thresholds Token Merging and Pruning for Vision Transformers | Code | 1 |
| Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | Code | 1 |
| Making Vision Transformers Efficient from A Token Sparsification View | Code | 1 |
| MDViT: Multi-domain Vision Transformer for Small Medical Image Segmentation Datasets | Code | 1 |
| Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | Code | 1 |
| Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | Code | 1 |
| Global Vision Transformer Pruning with Hessian-Aware Saliency | Code | 1 |
