
Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
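To make the listed techniques concrete, here is a minimal sketch of attention-based token pruning in PyTorch. It is illustrative only, not the exact method of any paper below (DynamicViT, for instance, learns keep/drop decisions with a trained prediction module rather than raw attention scores); the names `prune_tokens`, `cls_attn`, and `keep_ratio` are assumptions for this example.

```python
import torch

def prune_tokens(tokens: torch.Tensor, cls_attn: torch.Tensor,
                 keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the top-k patch tokens ranked by [CLS] attention.

    tokens:   (B, N, D) embeddings; index 0 is the [CLS] token.
    cls_attn: (B, N - 1) attention from [CLS] to each patch token.
    """
    B, N, D = tokens.shape
    k = max(1, int((N - 1) * keep_ratio))
    # Indices of the k most-attended patch tokens; +1 skips past [CLS].
    topk = cls_attn.topk(k, dim=1).indices + 1
    idx = topk.unsqueeze(-1).expand(-1, -1, D)        # (B, k, D)
    kept = torch.gather(tokens, 1, idx)
    # Re-attach [CLS] so downstream blocks see (B, k + 1, D).
    return torch.cat([tokens[:, :1], kept], dim=1)

# Hypothetical usage: drop half the patch tokens after a transformer block.
tokens = torch.randn(2, 197, 768)     # ViT-B/16: 196 patches + [CLS]
cls_attn = torch.rand(2, 196)         # stand-in for real attention weights
pruned = prune_tokens(tokens, cls_attn, keep_ratio=0.5)
print(pruned.shape)                   # torch.Size([2, 99, 768])
```

Token merging takes the complementary route: instead of discarding low-scoring tokens, similar tokens are averaged together, which preserves their information at reduced sequence length.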

Papers

Showing 21–30 of 32 papers

| Title | Status | Hype |
| --- | --- | --- |
| Chasing Sparsity in Vision Transformers: An End-to-End Exploration | Code | 1 |
| DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification | Code | 1 |
| All Tokens Matter: Token Labeling for Training Better Vision Transformers | Code | 1 |
| Scalable Vision Transformers with Hierarchical Pooling | Code | 1 |
| Training data-efficient image transformers & distillation through attention | Code | 1 |
| ImagePiece: Content-aware Re-tokenization for Efficient Image Recognition | | 0 |
| M^2-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization | | 0 |
| Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer | Code | 0 |
| An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT | | 0 |
| Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers | Code | 0 |
