
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
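As an illustrative aside (not drawn from any paper listed below), the following NumPy sketch shows minimal versions of the three techniques named in the description: magnitude-based parameter pruning, low-rank factorization via truncated SVD, and uniform weight quantization. The function names, the random stand-in weight matrix, and the chosen sparsity, rank, and bit-width are all hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Parameter pruning: zero out the smallest-magnitude weights so that
    roughly `sparsity` fraction of the entries become zero."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value serves as the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Low-rank factorization: approximate W (m x n) by U_r @ V_r using a
    truncated SVD, storing m*r + r*n values instead of m*n."""
    U, S, Vt = np.linalg.svd(weights, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank]

def quantize_uniform(weights: np.ndarray, num_bits: int = 8):
    """Weight quantization: uniform affine quantization of 32-bit floats to
    `num_bits` unsigned integers, plus the dequantized reconstruction."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # avoid divide-by-zero
    q = np.clip(np.round((weights - w_min) / scale), qmin, qmax).astype(np.uint8)
    return q, q.astype(np.float32) * scale + w_min

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in weight matrix

    pruned = magnitude_prune(w, sparsity=0.9)   # ~90% of weights zeroed
    u_r, v_r = low_rank_factorize(w, rank=16)   # 256x256 -> two 256x16 factors
    q, w_hat = quantize_uniform(w, num_bits=8)  # float32 -> uint8

    print(f"pruned sparsity:        {np.mean(pruned == 0):.2%}")
    print(f"rank-16 approx. error:  {np.linalg.norm(w - u_r @ v_r) / np.linalg.norm(w):.4f}")
    print(f"8-bit quantization MSE: {np.mean((w - w_hat) ** 2):.6f}")
```

In practice these operations are applied per layer of a trained network and are typically followed by fine-tuning to recover any lost accuracy.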

Papers

Showing 371-380 of 1356 papers

Title | Status | Hype
Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks | | 0
Differential Privacy Meets Federated Learning under Communication Constraints | | 0
Dream Distillation: A Data-Independent Model Compression Framework | | 0
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey | | 0
DiPaCo: Distributed Path Composition | | 0
DipSVD: Dual-importance Protected SVD for Efficient LLM Compression | | 0
Automatic Mixed-Precision Quantization Search of BERT | | 0
Discrete Model Compression With Resource Constraint for Deep Neural Networks | | 0
Deep Compression of Neural Networks for Fault Detection on Tennessee Eastman Chemical Processes | | 0
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified