Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
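As a concrete illustration of one of the methods named above, the sketch below applies magnitude-based parameter pruning to a single layer using PyTorch's `torch.nn.utils.prune` utilities. The layer dimensions and the 50% sparsity level are illustrative assumptions, not values taken from any paper on this page.

```python
# Minimal sketch: magnitude-based (L1) unstructured pruning of one layer.
# The layer size and sparsity level are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)  # stand-in for one layer of a deep network

# Zero out the 50% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Pruning is applied via a mask at forward time; make it permanent
# so the zeros are baked into the weight tensor itself.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"Fraction of zeroed weights: {sparsity:.2f}")  # ~0.50
```

In practice, pruning is usually interleaved with fine-tuning so the remaining weights can recover the accuracy lost when parameters are removed.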

Papers

Showing 1221–1230 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures | Code | 0 |
| Trainable pruned ternary quantization for medical signal classification models | Code | 0 |
| Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization | Code | 0 |
| A Computing Kernel for Network Binarization on PyTorch | Code | 0 |
| Robust and Large-Payload DNN Watermarking via Fixed, Distribution-Optimized, Weights | Code | 0 |
| Towards Efficient Model Compression via Learned Global Ranking | Code | 0 |
| GASL: Guided Attention for Sparsity Learning in Deep Neural Networks | Code | 0 |
| Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model | Code | 0 |
| Robust Model Compression Using Deep Hypotheses | Code | 0 |
| From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified |