Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
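
The three families of techniques named above can each be sketched in a few lines. Below is a minimal, illustrative PyTorch sketch, not a reference implementation: the 512-unit layer, 90% sparsity, rank 32, and int8 dtype are arbitrary choices for demonstration, not values taken from any paper listed here. Magnitude pruning zeroes small weights, SVD truncation replaces one large matrix with two thin ones, and dynamic quantization stores weights in 8 bits.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
linear = nn.Linear(512, 512, bias=False)

# 1) Parameter pruning: zero out the smallest-magnitude 90% of weights.
with torch.no_grad():
    w = linear.weight
    k = int(0.9 * w.numel())                      # number of weights to drop
    threshold = w.abs().flatten().kthvalue(k).values
    w.mul_(w.abs() >= threshold)                  # keep only the largest ~10%

# 2) Low-rank factorization: approximate W (512x512) by A (512xr) @ B (rx512),
#    which cuts parameters from 512*512 to 2*512*r when r is small.
r = 32
U, S, Vh = torch.linalg.svd(linear.weight, full_matrices=False)
factored = nn.Sequential(nn.Linear(512, r, bias=False),
                         nn.Linear(r, 512, bias=False))
with torch.no_grad():
    factored[0].weight.copy_(Vh[:r, :])           # B: r x 512
    factored[1].weight.copy_(U[:, :r] * S[:r])    # A: 512 x r

# 3) Weight quantization: dynamic 8-bit quantization of all Linear layers.
quantized = torch.ao.quantization.quantize_dynamic(
    nn.Sequential(linear), {nn.Linear}, dtype=torch.qint8)
```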

Papers

Showing 311–320 of 1356 papers

- An Automatic and Efficient BERT Pruning for Edge AI Systems
- Beyond the Tip of Efficiency: Uncovering the Submerged Threats of Jailbreak Attacks in Small Language Models
- Analysis of Quantization on MLP-based Vision Models
- AdaDeep: A Usage-Driven, Automated Deep Model Compression Framework for Enabling Ubiquitous Intelligent Mobiles
- Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments
- Beware of Calibration Data for Pruning Large Language Models
- Analysis of memory consumption by neural networks based on hyperparameters
- Benchmarking Adversarial Robustness of Compressed Deep Learning Models
- An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers
- ACAM-KD: Adaptive and Cooperative Attention Masking for Knowledge Distillation

Benchmark Results

# | Model                                              | Metric   | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13   | –        | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17   | –        | Unverified
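
For context on the model names above: a "b-bit, 1-dim" weight-clustering scheme such as DKM stores, for each scalar weight, an index into a table of 2^b shared centroids, so the 2-bit configuration keeps only four distinct weight values per tensor. The sketch below shows the plain hard-assignment k-means version of this idea (DKM itself relaxes the cluster assignment into a differentiable form so it can be trained end to end); the tensor shape and iteration count are illustrative.

```python
import torch

def cluster_weights(w: torch.Tensor, bits: int = 2, iters: int = 10):
    """Quantize a weight tensor to 2**bits shared centroids via hard k-means."""
    flat = w.flatten()
    k = 2 ** bits
    # Spread the initial centroids across the observed weight range.
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    for _ in range(iters):
        # Assignment step: nearest centroid for every weight.
        codes = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        # Update step: each centroid becomes the mean of its members.
        for j in range(k):
            members = flat[codes == j]
            if members.numel() > 0:
                centroids[j] = members.mean()
    codes = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
    # Only `centroids` (k floats) and `codes` (b bits per weight) need storing.
    return centroids[codes].view_as(w), centroids, codes

w = torch.randn(128, 128)
w_q, centroids, codes = cluster_weights(w, bits=2)  # four shared values
```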