SOTAVerified

Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
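As a rough illustration of the token-pruning idea behind several of the papers below (e.g., EViT-style approaches), here is a minimal sketch: rank patch tokens by the attention they receive from the CLS token and keep only the top fraction. The function name, shapes, and the fixed `keep_ratio` are illustrative assumptions, not any specific paper's API.

```python
import numpy as np

def prune_tokens(tokens, cls_attn, keep_ratio=0.5):
    """Keep the top-k patch tokens ranked by CLS-attention score.

    tokens:   (N, D) patch token embeddings (CLS token excluded)
    cls_attn: (N,) attention weights from the CLS token to each patch

    This is a simplified sketch of attention-based pruning; real methods
    typically prune per-layer and may fuse discarded tokens instead.
    """
    k = max(1, int(len(tokens) * keep_ratio))
    # indices of the k most-attended tokens, restored to original order
    keep = np.sort(np.argsort(cls_attn)[::-1][:k])
    return tokens[keep], keep

# toy example: 8 patch tokens with 4-dim embeddings
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
cls_attn = rng.random(8)
pruned, kept = prune_tokens(tokens, cls_attn, keep_ratio=0.5)
```

Because the sequence length shrinks, the cost of every subsequent attention layer drops roughly quadratically in the number of tokens removed.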

Papers

Showing 11–20 of 32 papers

Title | Status | Hype
GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation | Code | 1
Training data-efficient image transformers & distillation through attention | Code | 1
Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | Code | 1
Learned Thresholds Token Merging and Pruning for Vision Transformers | Code | 1
All Tokens Matter: Token Labeling for Training Better Vision Transformers | Code | 1
Making Vision Transformers Efficient from A Token Sparsification View | Code | 1
MDViT: Multi-domain Vision Transformer for Small Medical Image Segmentation Datasets | Code | 1
Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | Code | 1
Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | Code | 1
Global Vision Transformer Pruning with Hessian-Aware Saliency | Code | 1
Page 2 of 4

No leaderboard results yet.