
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
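Two of the methods named above, parameter pruning and weight quantization, can be illustrated in a few lines. The sketch below applies magnitude-based pruning and uniform quantization to a random weight matrix; it is a minimal NumPy illustration, and the function names, 90% sparsity target, and 4-bit width are illustrative assumptions rather than any specific paper's recipe.

```python
# Minimal sketch of two model-compression primitives:
# magnitude-based parameter pruning and uniform weight quantization.
# Parameters (90% sparsity, 4-bit levels) are illustrative assumptions.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def uniform_quantize(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Map weights onto 2**num_bits evenly spaced levels, then dequantize."""
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (2 ** num_bits - 1)
    levels = np.round((weights - w_min) / scale)
    return levels * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.9)  # ~90% of entries set to zero
w_quant = uniform_quantize(w, num_bits=4)    # at most 16 distinct values

print(f"pruned sparsity: {np.mean(w_pruned == 0):.2f}, "
      f"quantized unique values: {len(np.unique(w_quant))}")
```

In practice these steps are interleaved with fine-tuning to recover accuracy; the sketch only shows the mechanics of thresholding and level mapping.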

Papers

Showing 1211–1220 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Creating Lightweight Object Detectors with Model Compression for Deployment on Edge Devices |  | 0 |
| 26ms Inference Time for ResNet-50: Towards Real-Time Execution of all DNNs on Smartphone |  | 0 |
| Toward Extremely Low Bit and Lossless Accuracy in DNNs with Progressive ADMM |  | 0 |
| Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network |  | 0 |
| Selective Convolutional Units: Improving CNNs via Channel Selectivity |  | 0 |
| Model Compression with Generative Adversarial Networks |  | 0 |
| N-Ary Quantization for CNN Model Compression and Inference Acceleration |  | 0 |
| Integral Pruning on Activations and Weights for Efficient Neural Networks |  | 0 |
| Towards Efficient Model Compression via Learned Global Ranking | Code | 0 |
| Conditional Teacher-Student Learning |  | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified |