SOTAVerified

Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
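To make the token-merging idea concrete, here is a minimal illustrative sketch: greedily average the most similar token pairs so the sequence shrinks without changing the model architecture. This is a simplified toy variant for intuition, not the algorithm of any specific paper listed below; the function name and greedy strategy are this sketch's own assumptions.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most similar token pairs by averaging (toy sketch).

    tokens: (N, D) array of token embeddings. Returns an (N - r, D) array.
    """
    tokens = tokens.astype(float).copy()
    for _ in range(r):
        n = tokens.shape[0]
        # Cosine similarity between all pairs of tokens
        unit = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
        sim = unit @ unit.T
        np.fill_diagonal(sim, -np.inf)  # ignore self-similarity
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        # Replace the most similar pair with its average
        merged = (tokens[i] + tokens[j]) / 2
        keep = [k for k in range(n) if k not in (i, j)]
        tokens = np.vstack([tokens[keep], merged[None, :]])
    return tokens
```

Shrinking the token count this way reduces the quadratic cost of self-attention, which is the common motivation behind the pruning and merging papers below.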

Papers

Showing 1–10 of 32 papers

Title | Status | Hype
Token Merging: Your ViT But Faster | Code | 3
Fast Vision Transformers with HiLo Attention | Code | 2
Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | Code | 1
AdaViT: Adaptive Tokens for Efficient Vision Transformer | Code | 1
Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference | Code | 1
Chasing Sparsity in Vision Transformers: An End-to-End Exploration | Code | 1
DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | Code | 1
DiffRate: Differentiable Compression Rate for Efficient Vision Transformers | Code | 1
Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer | Code | 1
Adaptive Token Sampling For Efficient Vision Transformers | Code | 1
