SOTAVerified

Efficient ViTs

Increasing the efficiency of ViTs without modifying the architecture (e.g., key & query sparsification, token pruning & merging).
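As an illustration of the token-pruning family listed below, here is a minimal NumPy sketch of CLS-attention-based token pruning: patch tokens are ranked by how much attention the [CLS] token pays them, and only the top fraction is kept. The function name and the use of plain NumPy are assumptions for illustration; individual papers in this collection differ in how they score and recycle tokens.

```python
import numpy as np

def prune_tokens(tokens: np.ndarray, cls_attn: np.ndarray, keep_ratio: float = 0.5):
    """Keep the top-k patch tokens ranked by the attention the [CLS] token pays them.

    tokens:   (N, D) patch token embeddings ([CLS] excluded).
    cls_attn: (N,) attention weights from [CLS] to each patch token.
    Returns the kept tokens (original order preserved) and their indices.
    """
    k = max(1, int(round(keep_ratio * tokens.shape[0])))
    keep = np.sort(np.argsort(cls_attn)[::-1][:k])  # top-k indices, back in order
    return tokens[keep], keep

# Toy usage: 8 tokens with 4-dim embeddings, keep half of them.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))
cls_attn = rng.random(8)
kept, idx = prune_tokens(tokens, cls_attn, keep_ratio=0.5)
print(kept.shape)  # (4, 4)
```

Merging-style methods (as opposed to pruning) would instead fuse the discarded tokens into the survivors, e.g. by attention-weighted averaging, rather than dropping them outright.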

Papers

Showing 1–10 of 32 papers

| Title | Status | Hype |
|-------|--------|------|
| ImagePiece: Content-aware Re-tokenization for Efficient Image Recognition | | 0 |
| M^2-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization | | 0 |
| Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer | Code | 0 |
| An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT | | 0 |
| Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers | Code | 1 |
| GTP-ViT: Efficient Vision Transformers via Graph-based Token Propagation | Code | 1 |
| PPT: Token Pruning and Pooling for Efficient Vision Transformers | Code | 1 |
| Learned Thresholds Token Merging and Pruning for Vision Transformers | Code | 1 |
| MDViT: Multi-domain Vision Transformer for Small Medical Image Segmentation Datasets | Code | 1 |
| ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer | Code | 1 |
Page 1 of 4

No leaderboard results yet.