SOTAVerified

Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
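As a rough illustration of the token-pruning idea mentioned above, the sketch below keeps only the top-k patch tokens ranked by their [CLS] attention scores. The function name, the keep ratio, and the use of a precomputed attention vector are all illustrative assumptions, not the method of any specific paper in this list.

```python
import numpy as np

def prune_tokens(tokens, cls_attn, keep_ratio=0.5):
    """Illustrative token pruning: keep the top-k tokens ranked by
    their attention score from the [CLS] token (higher = more important)."""
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    keep = np.argsort(cls_attn)[::-1][:k]  # indices of the k highest-scoring tokens
    keep = np.sort(keep)                   # restore the original token order
    return tokens[keep]

# Toy example: 8 patch tokens with 4-dim embeddings and made-up attention scores.
tokens = np.arange(32, dtype=np.float32).reshape(8, 4)
cls_attn = np.array([0.05, 0.30, 0.10, 0.02, 0.25, 0.08, 0.15, 0.05])
pruned = prune_tokens(tokens, cls_attn, keep_ratio=0.5)
print(pruned.shape)  # (4, 4)
```

In practice the keep ratio is applied per layer, and several of the papers above additionally merge (rather than discard) the low-scoring tokens to retain their information.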

Papers

Showing 21–30 of 32 papers

Title | Status | Hype
SPViT: Enabling Faster Vision Transformers via Soft Token Pruning | Code | 1
PPT: Token Pruning and Pooling for Efficient Vision Transformers | Code | 1
Scalable Vision Transformers with Hierarchical Pooling | Code | 1
Pruning Self-attentions into Convolutional Layers in Single Path | Code | 1
ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer | Code | 1
Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer | Code | 0
Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers | Code | 0
Patch Slimming for Efficient Vision Transformers | — | 0
M^2-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization | — | 0
ImagePiece: Content-aware Re-tokenization for Efficient Image Recognition | — | 0
Page 3 of 4

No leaderboard results yet.