SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods that have been proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
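
A minimal sketch of the first of these techniques, parameter pruning, using PyTorch's built-in pruning utilities (torch.nn.utils.prune); the layer shape and the 50% pruning ratio are arbitrary choices for illustration, not values from any paper listed below:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy layer standing in for part of a larger network.
layer = nn.Linear(256, 64)

# Magnitude-based (L1) unstructured pruning: zero out the 50% of
# weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Fold the pruning mask into the weights so the zeros become permanent.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"Weight sparsity after pruning: {sparsity:.0%}")
```

Low-rank factorization and weight quantization follow the same broad pattern: re-encode or approximate the dense weights so the network stores fewer, or cheaper, parameters.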

Papers

Showing 1151–1160 of 1356 papers

Title | Status | Hype
Graph Pruning for Model Compression | – | 0
Few Shot Network Compression via Cross Distillation | Code | 0
DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks | – | 0
On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning | Code | 0
Distributed Low Precision Training Without Mixed Precision | – | 0
ASCAI: Adaptive Sampling for acquiring Compact AI | – | 0
Data Efficient Stagewise Knowledge Distillation | Code | 0
What Do Compressed Deep Neural Networks Forget? | Code | 0
A Computing Kernel for Network Binarization on PyTorch | Code | 0
Sub-Character Chinese-English Neural Machine Translation with Wubi encoding | – | 0
Page 116 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified
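
The DKM entries above compress MobileBERT by clustering its weights into a small shared palette: "2bit-1dim" means each scalar weight is replaced by one of 2^2 = 4 shared values. The sketch below illustrates the underlying weight-clustering idea with plain k-means; DKM itself makes the cluster assignment differentiable so compression can be trained end to end, which this sketch omits. The function name and parameter choices are illustrative, not taken from the paper:

```python
import torch

def kmeans_palettize(weight: torch.Tensor, bits: int = 2, iters: int = 20) -> torch.Tensor:
    """Cluster scalar weights into 2**bits shared values (1-dim palettization).

    Plain, non-differentiable k-means stand-in for DKM; illustrative only.
    """
    w = weight.flatten()
    k = 2 ** bits
    # Initialize centroids evenly across the observed weight range.
    centroids = torch.linspace(w.min().item(), w.max().item(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = (w.unsqueeze(1) - centroids.unsqueeze(0)).abs().argmin(dim=1)
        # Move each centroid to the mean of the weights assigned to it.
        for j in range(k):
            mask = assign == j
            if mask.any():
                centroids[j] = w[mask].mean()
    # Replace every weight with its centroid, restoring the original shape.
    return centroids[assign].reshape(weight.shape)

w = torch.randn(64, 256)
w_q = kmeans_palettize(w, bits=2)
print(f"Unique values after 2-bit palettization: {w_q.unique().numel()}")  # at most 4
```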