SOTAVerified

Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
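Token pruning, one of the techniques covered by the papers below, drops the least important tokens before later transformer blocks so that attention cost shrinks. A minimal sketch, assuming importance scores are already available (e.g., the CLS token's attention weights); the function name and shapes are illustrative, not taken from any specific paper:

```python
import numpy as np

def prune_tokens(tokens: np.ndarray, scores: np.ndarray, keep: int) -> np.ndarray:
    """Keep the `keep` highest-scoring tokens, preserving their
    original order so positional structure is not scrambled.

    tokens: (N, D) token embeddings
    scores: (N,) importance score per token (e.g., CLS attention)
    """
    idx = np.argsort(scores)[-keep:]  # indices of the top-`keep` tokens
    idx.sort()                        # restore original sequence order
    return tokens[idx]

# Toy example: 6 tokens of dim 4; keep the 3 highest-scoring ones.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 4))
scores = np.array([0.9, 0.1, 0.5, 0.05, 0.7, 0.2])
kept = prune_tokens(tokens, scores, keep=3)  # rows 0, 2, 4 survive
```

In practice the scores come for free from an attention map already computed inside the network, so the pruning step itself adds almost no overhead.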

Papers

Showing 21–30 of 32 papers

| Title | Status | Hype |
| --- | --- | --- |
| AdaViT: Adaptive Tokens for Efficient Vision Transformer | Code | 1 |
| Adaptive Token Sampling For Efficient Vision Transformers | Code | 1 |
| Pruning Self-attentions into Convolutional Layers in Single Path | Code | 1 |
| Global Vision Transformer Pruning with Hessian-Aware Saliency | Code | 1 |
| Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer | Code | 1 |
| IA-RED^2: Interpretability-Aware Redundancy Reduction for Vision Transformers | | 0 |
| Chasing Sparsity in Vision Transformers: An End-to-End Exploration | Code | 1 |
| Patch Slimming for Efficient Vision Transformers | | 0 |
| DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | Code | 1 |
| All Tokens Matter: Token Labeling for Training Better Vision Transformers | Code | 1 |
Page 3 of 4

No leaderboard results yet.