
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
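
To make the three families named above concrete, here is a minimal NumPy sketch that applies magnitude pruning, truncated-SVD low-rank factorization, and uniform quantization to a single weight matrix. It is not code from any of the listed papers; the function names and the 90% sparsity, rank-32, and 8-bit settings are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Approximate w (out x in) as a product of two thin matrices via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (out, rank)
    b = vt[:rank, :]             # (rank, in); w ~= a @ b
    return a, b

def uniform_quantize(w: np.ndarray, bits: int = 8):
    """Uniform affine quantization of weights to `bits`-bit integer codes."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2 ** bits - 1)
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo  # dequantize with q * scale + lo

w = np.random.randn(256, 512).astype(np.float32)
a, b = low_rank_factorize(magnitude_prune(w), rank=32)
q, scale, zero_point = uniform_quantize(w, bits=8)
```

Each transform trades accuracy for size differently: pruning stores fewer nonzeros, factorization stores two thin matrices instead of one dense one, and quantization stores low-bit integer codes plus a scale and offset.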

Papers

Showing 1211–1220 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| StrassenNets: Deep Learning with a Multiplication Budget | Code | 0 |
| Application Specific Compression of Deep Learning Models | Code | 0 |
| Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks | Code | 0 |
| Annealing Knowledge Distillation | Code | 0 |
| On the Utility of Gradient Compression in Distributed Training Systems | Code | 0 |
| Distilling Model Knowledge | Code | 0 |
| Learning Intrinsic Sparse Structures within Long Short-Term Memory | Code | 0 |
| Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders | Code | 0 |
| ThreshNet: An Efficient DenseNet Using Threshold Mechanism to Reduce Connections | Code | 0 |
| TQCompressor: improving tensor decomposition methods in neural networks via permutations | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified |
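
The DKM entries above refer to differentiable k-means weight clustering. As a rough intuition only, the sketch below shows plain (hard-assignment) k-means weight clustering, assuming "2bit-1dim" means clustering individual scalar weights into 2² = 4 shared values. This is not the DKM algorithm itself, which replaces the hard assignment with a differentiable, attention-based soft assignment so clustering can be trained end to end.

```python
import numpy as np

def kmeans_cluster_weights(w: np.ndarray, bits: int = 2, iters: int = 20):
    """Cluster scalar weights into 2**bits shared values with hard k-means.

    Returns per-weight cluster indices (storable in `bits` bits each)
    plus the table of 2**bits float centroids.
    """
    flat = w.reshape(-1)
    k = 2 ** bits
    # Initialize centroids spread across the observed weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Hard assignment: nearest centroid per weight (DKM softens this step).
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return assign.reshape(w.shape), centroids

codes, centroids = kmeans_cluster_weights(np.random.randn(128, 128), bits=2)
compressed = centroids[codes]  # dequantized weights: 2-bit codes + 4 floats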