
Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
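To make two of these techniques concrete, below is a minimal NumPy sketch of magnitude-based parameter pruning and uniform (fake) weight quantization. The function names, the 90% sparsity level, and the 4-bit setting are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` fraction is pruned."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def uniform_quantize(weights: np.ndarray, bits: int = 8) -> np.ndarray:
    """Quantize weights to 2**bits uniform levels, then dequantize (fake quantization)."""
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels
    q = np.round((weights - w_min) / scale)
    return q * scale + w_min

# Illustrative usage: prune 90% of a random weight matrix, then quantize to 4 bits.
w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)
w_small = uniform_quantize(w_pruned, bits=4)
print(f"nonzero fraction after pruning: {(w_pruned != 0).mean():.3f}")
print(f"unique values after 4-bit quantization: {len(np.unique(w_small))}")
```

In practice such pruned and quantized weights are stored in sparse or low-bit formats to realize the memory savings; the sketch only shows the value transformation itself.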

Papers

Showing 1301–1310 of 1356 papers

Title | Status | Hype
ELSA: Exploiting Layer-wise N:M Sparsity for Vision Transformer Acceleration | Code | 0
Preserved central model for faster bidirectional compression in distributed settings | Code | 0
Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning | Code | 0
ML Research Benchmark | Code | 0
Einconv: Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks | Code | 0
Self-Supervised Learning from Contrastive Mixtures for Personalized Speech Enhancement | Code | 0
A Miniaturized Semantic Segmentation Method for Remote Sensing Image | Code | 0
Efficient Speech Translation through Model Compression and Knowledge Distillation | Code | 0
Adversarial Robustness vs. Model Compression, or Both? | Code | 0
Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified