Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
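Below is a minimal, illustrative NumPy sketch of the three techniques named in the description. The function names, the toy 8x8 weight matrix, and all hyperparameters are assumptions for illustration, not taken from the cited paper or any specific library:

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Parameter pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) > threshold, w, 0.0)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Low-rank factorization: approximate W with two thin factors via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]  # W is approximately A @ B

def uniform_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Weight quantization: snap weights to 2**bits uniformly spaced levels."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (2 ** bits - 1)
    return np.round((w - w_min) / scale) * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8)).astype(np.float32)
a, b = low_rank_factorize(w, rank=2)
print(np.count_nonzero(magnitude_prune(w, sparsity=0.75)))  # ~25% of 64 entries survive
print(np.abs(w - a @ b).max())                              # rank-2 reconstruction error
print(np.unique(uniform_quantize(w, bits=2)).size)          # at most 4 distinct values
```

Each function trades accuracy for storage: pruning yields a sparse matrix, the SVD factors replace an m×n matrix with m×r plus r×n entries, and quantization shrinks each weight from 32 bits down to the chosen bit width.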

Papers

Showing 611–620 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| An Automatic and Efficient BERT Pruning for Edge AI Systems | | 0 |
| Discrete Model Compression With Resource Constraint for Deep Neural Networks | | 0 |
| Beyond the Tip of Efficiency: Uncovering the Submerged Threats of Jailbreak Attacks in Small Language Models | | 0 |
| DipSVD: Dual-importance Protected SVD for Efficient LLM Compression | | 0 |
| DiPaCo: Distributed Path Composition | | 0 |
| Analysis of Quantization on MLP-based Vision Models | | 0 |
| AdaDeep: A Usage-Driven, Automated Deep Model Compression Framework for Enabling Ubiquitous Intelligent Mobiles | | 0 |
| Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey | | 0 |
| Beware of Calibration Data for Pruning Large Language Models | | 0 |
| Differential Privacy Meets Federated Learning under Communication Constraints | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |