Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
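As a minimal illustration of the token-pruning idea, the sketch below keeps only the patch tokens that receive the most attention from the [CLS] token, a common importance proxy in this line of work (used in a similar form by, e.g., EViT-style token reorganization). The function name, the `keep_ratio` parameter, and the assumption that per-token [CLS] attention is available are all illustrative, not taken from any specific paper.

```python
import torch

def prune_by_cls_attention(tokens: torch.Tensor,
                           cls_attn: torch.Tensor,
                           keep_ratio: float = 0.7) -> torch.Tensor:
    """Drop the least important patch tokens between transformer blocks.

    tokens:   (B, N, D) with the [CLS] token at index 0.
    cls_attn: (B, N-1) attention weights from [CLS] to each patch token,
              used here as an importance score (one common criterion).
    """
    B, N, D = tokens.shape
    num_keep = max(1, int((N - 1) * keep_ratio))
    # Indices of the top-k patch tokens, ranked by [CLS] attention.
    topk = cls_attn.topk(num_keep, dim=1).indices      # (B, num_keep)
    idx = topk.unsqueeze(-1).expand(-1, -1, D)         # (B, num_keep, D)
    kept = tokens[:, 1:].gather(1, idx)                # keep important patches
    return torch.cat([tokens[:, :1], kept], dim=1)     # re-attach [CLS]
```

A complementary token-merging sketch follows the paper list below.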

Papers

Showing 11–20 of 32 papers

| Title | Status | Hype |
|---|---|---|
| DiffRate: Differentiable Compression Rate for Efficient Vision Transformers | Code | 1 |
| Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers | Code | 1 |
| Making Vision Transformers Efficient from A Token Sparsification View | Code | 1 |
| Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers | Code | 0 |
| Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference | Code | 1 |
| Token Merging: Your ViT But Faster | Code | 3 |
| Adaptive Sparse ViT: Towards Learnable Adaptive Token Pruning by Fully Exploiting Self-Attention | Code | 1 |
| Fast Vision Transformers with HiLo Attention | Code | 2 |
| Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations | Code | 1 |
| SPViT: Enabling Faster Vision Transformers via Soft Token Pruning | Code | 1 |
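On the merging side, Token Merging (ToMe, listed above) pairs similar tokens across two alternating sets and averages them instead of discarding anything. The sketch below is a simplified rendition of that bipartite matching: it uses a plain rather than size-weighted average and does not protect the [CLS] token, both of which the actual method handles.

```python
import torch
import torch.nn.functional as F

def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Simplified bipartite token merging in the spirit of ToMe.

    Splits the N tokens into two alternating sets, finds the r most
    similar cross-set pairs by cosine similarity, and averages each
    matched pair. Input (B, N, D), output (B, N - r, D).
    """
    B, N, D = x.shape
    a, b = x[:, ::2], x[:, 1::2]                       # alternating split
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).transpose(-1, -2)
    node_max, node_idx = sim.max(dim=-1)               # best partner in b per a-token
    order = node_max.argsort(dim=-1, descending=True)
    src_idx, unm_idx = order[:, :r], order[:, r:]      # a-tokens to merge vs. keep
    dst_idx = node_idx.gather(1, src_idx)              # their partners in b
    src = a.gather(1, src_idx.unsqueeze(-1).expand(-1, -1, D))
    # Average each merged a-token into its partner (plain mean here;
    # ToMe tracks token "sizes" for a weighted average instead).
    b = b.scatter_reduce(1, dst_idx.unsqueeze(-1).expand(-1, -1, D),
                         src, reduce="mean")
    unm = a.gather(1, unm_idx.unsqueeze(-1).expand(-1, -1, D))
    return torch.cat([unm, b], dim=1)
```

Merging rather than pruning preserves information from redundant tokens, which is one reason ToMe can be dropped into an already-trained ViT without retraining.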
