
Model Compression

Model Compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
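
The three techniques named in the description admit compact illustrations. The sketch below is a minimal, self-contained NumPy example; the function names and hyperparameters are illustrative assumptions of ours, not taken from any paper listed on this page. It shows unstructured magnitude pruning, symmetric uniform quantization, and low-rank factorization via truncated SVD, each applied to a random weight matrix.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value over the flattened array.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def uniform_quantize(weights, bits=8):
    """Symmetric uniform quantization: snap weights to a coarse grid
    with 2^(bits-1) - 1 positive levels."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / levels
    return np.round(weights / scale) * scale

def low_rank_factorize(weights, rank):
    """Low-rank factorization: approximate W (m x n) as A (m x r) @ B (r x n)
    using a truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)   # ~90% of entries become zero
w_quant = uniform_quantize(w, bits=4)         # weights snap to a 4-bit grid
a, b = low_rank_factorize(w, rank=16)         # W is approximated by a @ b
print(np.mean(w_pruned == 0), np.unique(w_quant).size, a.shape, b.shape)
```

Each path trades accuracy for size in a different way: pruning yields sparse tensors, quantization shrinks per-weight storage, and the rank-16 factorization stores 2 x 256 x 16 values instead of 256 x 256, roughly an 8x reduction for this matrix.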

Papers

Showing 1221–1230 of 1356 papers

Title | Status | Hype
Survey of Dropout Methods for Deep Neural Networks | | 0
Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System | | 0
Compression and Localization in Reinforcement Learning for ATARI Games | | 0
Matrix and tensor decompositions for training binary neural networks | | 0
Shakeout: A New Approach to Regularized Deep Neural Network Training | Code | 0
3DQ: Compact Quantized Neural Networks for Volumetric Whole Brain Segmentation | | 0
Model Slicing for Supporting Complex Analytics with Elastic Inference Cost and Resource Constraints | Code | 0
Adversarial Robustness vs Model Compression, or Both? | Code | 0
Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM | Code | 0
Real time backbone for semantic segmentation | | 0
Page 123 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified