Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
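As a rough illustration of the token-pruning idea referenced above, the sketch below drops the least-attended patch tokens between blocks of a standard ViT. It assumes PyTorch; the function `prune_tokens`, its signature, and the CLS-attention scoring heuristic are illustrative assumptions, not taken from any specific paper in this list.

```python
import torch

def prune_tokens(tokens: torch.Tensor, attn: torch.Tensor, keep_ratio: float = 0.7) -> torch.Tensor:
    """Drop the least-attended patch tokens, keeping the CLS token.

    tokens: (B, N, D) token sequence with CLS at index 0.
    attn:   (B, H, N, N) attention weights from the preceding block.
    """
    B, N, D = tokens.shape
    # Score each patch token by the attention the CLS token pays to it,
    # averaged over heads (a common importance proxy in pruning methods).
    cls_attn = attn[:, :, 0, 1:].mean(dim=1)             # (B, N-1)
    num_keep = max(1, int((N - 1) * keep_ratio))
    keep_idx = cls_attn.topk(num_keep, dim=1).indices    # (B, num_keep)
    keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, D)  # (B, num_keep, D)
    patches = tokens[:, 1:].gather(1, keep_idx)          # surviving patch tokens
    return torch.cat([tokens[:, :1], patches], dim=1)    # re-attach CLS token
```

Because only the token sequence shrinks, the weights and layer structure stay untouched, which is what makes pruning of this kind a drop-in change for pretrained ViTs.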

Papers

Showing 21–30 of 32 papers

| Title | Status | Hype |
| --- | --- | --- |
| Training data-efficient image transformers & distillation through attention | Code | 1 |
| PPT: Token Pruning and Pooling for Efficient Vision Transformers | Code | 1 |
| Scalable Vision Transformers with Hierarchical Pooling | Code | 1 |
| Pruning Self-attentions into Convolutional Layers in Single Path | Code | 1 |
| ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer | Code | 1 |
| M^2-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization | | 0 |
| An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT | | 0 |
| IA-RED^2: Interpretability-Aware Redundancy Reduction for Vision Transformers | | 0 |
| ImagePiece: Content-aware Re-tokenization for Efficient Image Recognition | | 0 |
| Patch Slimming for Efficient Vision Transformers | | 0 |
Page 3 of 4

Leaderboard

No leaderboard results yet.