
Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
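
As a rough illustration of the three techniques the description names, the sketch below applies magnitude-based parameter pruning, truncated-SVD low-rank factorization, and uniform weight quantization to a toy weight matrix in plain NumPy. The function names, sparsity level, rank, and bit-width are arbitrary choices for this example, not settings from any of the papers listed here.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (parameter pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def lowrank_factorize(weights: np.ndarray, rank: int):
    """Approximate a weight matrix by two thin factors (low-rank factorization)."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    # Storing an (m x r) and an (r x n) factor replaces the full (m x n) matrix.
    return u[:, :rank] * s[:rank], vt[:rank]

def uniform_quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Round weights onto a uniform grid of 2**bits levels (weight quantization)."""
    levels = 2 ** bits - 1
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((weights - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=(6, 6)).astype(np.float32)
print(magnitude_prune(w, sparsity=0.5))        # roughly half the entries set to zero
a, b = lowrank_factorize(w, rank=2)
print(np.linalg.norm(w - a @ b))               # rank-2 reconstruction error
print(np.unique(uniform_quantize(w, bits=2)))  # at most 4 distinct values remain
```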

Papers

Showing 381–390 of 1356 papers

Title | Status | Hype
Deep Compression of Neural Networks for Fault Detection on Tennessee Eastman Chemical Processes | | 0
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration | | 0
BinaryBERT: Pushing the Limit of BERT Quantization | | 0
Deep Collective Knowledge Distillation | | 0
An Effective Information Theoretic Framework for Channel Pruning | | 0
Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression | | 0
MobiSR: Efficient On-Device Super-Resolution through Heterogeneous Mobile Processors | | 0
BioNetExplorer: Architecture-Space Exploration of Bio-Signal Processing Deep Neural Networks for Wearables | | 0
Dynamic Probabilistic Pruning: Training sparse networks based on stochastic and dynamic masking | | 0
Decoupling Weight Regularization from Batch Size for Model Compression | | 0
Page 39 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
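
The DKM rows above refer to compressing weights by differentiable k-means clustering. As a hedged sketch of the underlying idea only, the snippet below uses plain hard k-means (not the soft, differentiable assignment DKM itself trains with) to show what a "2bit-1dim" configuration amounts to: scalar weights mapped to a codebook of 2**2 = 4 centroids plus a per-weight index. All names and settings here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kmeans_quantize(weights: np.ndarray, bits: int, iters: int = 20) -> np.ndarray:
    """Cluster scalar weights into 2**bits centroids (hard k-means sketch of DKM)."""
    flat = weights.ravel()
    k = 2 ** bits
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape)

w = np.random.default_rng(1).normal(size=(8, 8))
wq = kmeans_quantize(w, bits=2)   # "2bit-1dim": at most 4 distinct scalar values
print(np.unique(wq).size)
```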