
Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
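A recurring technique in this category is token pruning: drop low-importance patch tokens partway through the network so that later blocks attend over fewer tokens. The sketch below illustrates the general idea, assuming importance is scored by the attention the [CLS] token pays to each patch; the function name, shapes, and `keep_ratio` parameter are illustrative assumptions, not the exact method of any paper listed here.

```python
# Minimal sketch of attention-based token pruning for a ViT block.
# Assumptions: tokens include [CLS] at index 0, and patch importance is
# the mean attention [CLS] pays to each patch across heads.
import torch

def prune_tokens(tokens: torch.Tensor, attn: torch.Tensor,
                 keep_ratio: float = 0.5) -> torch.Tensor:
    """tokens: (B, N, D) embeddings with [CLS] at index 0.
    attn: (B, H, N, N) attention weights from the preceding block."""
    B, N, D = tokens.shape
    # Importance of each patch: attention from the [CLS] query, averaged over heads.
    cls_attn = attn[:, :, 0, 1:].mean(dim=1)            # (B, N-1)
    num_keep = max(1, int((N - 1) * keep_ratio))
    keep_idx = cls_attn.topk(num_keep, dim=1).indices   # (B, num_keep)
    patches = tokens[:, 1:, :]
    kept = patches.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    # Re-attach [CLS]; downstream blocks now run over fewer tokens.
    return torch.cat([tokens[:, :1, :], kept], dim=1)
```

Token merging follows the same pattern but combines similar tokens (e.g., by averaging) instead of discarding them, which preserves more information at the same token budget.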

Papers

Showing 26–32 of 32 papers

| Title | Status | Hype |
| --- | --- | --- |
| Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer | Code | 0 |
| Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers | Code | 0 |
| Patch Slimming for Efficient Vision Transformers | | 0 |
| M^2-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization | | 0 |
| ImagePiece: Content-aware Re-tokenization for Efficient Image Recognition | | 0 |
| IA-RED^2: Interpretability-Aware Redundancy Reduction for Vision Transformers | | 0 |
| An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT | | 0 |

Leaderboard

No leaderboard results yet.