
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
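As a minimal illustration of two of the techniques named above (this is a sketch, not code from any of the papers listed below; the function names `magnitude_prune` and `uniform_quantize` and all parameter choices are illustrative assumptions):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of `weights`."""
    k = int(sparsity * weights.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # ties at the threshold are also pruned
    return weights * mask

def uniform_quantize(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Symmetric uniform quantization: snap weights to a 2^num_bits-level grid."""
    qmax = 2 ** (num_bits - 1) - 1  # e.g. 127 for 8 bits
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale  # dequantize back to float for inspection

# Example: prune 90% of a random weight matrix, then quantize to 2 bits.
w = np.random.randn(256, 256).astype(np.float32)
w_small = uniform_quantize(magnitude_prune(w, sparsity=0.9), num_bits=2)
```

With `num_bits=2` the grid collapses to {-scale, 0, +scale}, i.e. the ternary weight scheme pursued by works such as Ternary Weight Networks in the list below.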

Papers

Showing 201–210 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| The State of Sparsity in Deep Neural Networks | Code | 1 |
| Learned Step Size Quantization | Code | 1 |
| ADMM-NN: An Algorithm-Hardware Co-Design Framework of DNNs Using Alternating Direction Method of Multipliers | Code | 1 |
| Discrimination-aware Channel Pruning for Deep Neural Networks | Code | 1 |
| Dynamic Channel Pruning: Feature Boosting and Suppression | Code | 1 |
| Verifiable Reinforcement Learning via Policy Extraction | Code | 1 |
| To prune, or not to prune: exploring the efficacy of pruning for model compression | Code | 1 |
| Ternary Weight Networks | Code | 1 |
| SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | Code | 1 |
| LINR-PCGC: Lossless Implicit Neural Representations for Point Cloud Geometry Compression | | 0 |
Page 21 of 136

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |