
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
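To make the three techniques named above concrete, here is a minimal NumPy sketch that applies magnitude-based pruning, truncated-SVD low-rank factorization, and uniform 8-bit quantization to a random weight matrix. The matrix size, sparsity level, rank, and bit width are illustrative assumptions, not values taken from any paper listed on this page.

```python
# Illustrative-only sketch of three common compression techniques applied
# to a single dense weight matrix. Real frameworks (e.g. torch.nn.utils.prune)
# implement these with far more machinery.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in weight matrix

# --- Parameter pruning: zero out the 90% of weights with smallest magnitude.
sparsity = 0.9
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
print(f"density after pruning: {np.count_nonzero(W_pruned) / W.size:.2%}")

# --- Low-rank factorization: keep the top-k singular values, so W ~ U_k S_k V_k.
k = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :k] * S[:k]) @ Vt[:k]
factored_params = U[:, :k].size + k + Vt[:k].size
print(f"params: {factored_params} (factored) vs {W.size} (dense)")

# --- Weight quantization: symmetric uniform quantization to 8-bit integers.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_q.astype(np.float32) * scale
print(f"quantization MSE: {np.mean((W - W_dequant) ** 2):.2e}")
```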

Papers

Showing 1321–1330 of 1356 papers

Title | Status | Hype
Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance | – | 0
Don't encrypt the data; just approximate the model – Towards Secure Transaction and Fair Pricing of Training Data | – | 0
Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network | – | 0
Bridging the Resource Gap: Deploying Advanced Imitation Learning Models onto Affordable Embedded Platforms | – | 0
Dream Distillation: A Data-Independent Model Compression Framework | – | 0
Dreaming To Prune Image Deraining Networks | – | 0
Stochastic Model Pruning via Weight Dropping Away and Back | – | 0
Bridging the Gap Between Foundation Models and Heterogeneous Federated Learning | – | 0
Dual Discriminator Adversarial Distillation for Data-free Model Compression | – | 0
Boosting Graph Neural Networks via Adaptive Knowledge Distillation | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified
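Assuming DKM here refers to differentiable k-means clustering of weights, "2bit-1dim" means each scalar weight is replaced by one of 2^2 = 4 shared centroids, so only a 2-bit index per weight plus a tiny codebook are stored. The sketch below shows that idea with plain (non-differentiable) k-means from scikit-learn, not the differentiable attention-based assignment of the actual method; the weight vector and bit width are illustrative assumptions.

```python
# Hedged sketch of 2-bit, 1-dimensional weight clustering: compress scalar
# weights into a 4-entry codebook using ordinary k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
weights = rng.standard_normal(4096).astype(np.float32)  # stand-in layer weights

bits = 2
kmeans = KMeans(n_clusters=2 ** bits, n_init=10, random_state=0)
codes = kmeans.fit_predict(weights.reshape(-1, 1))  # one 2-bit index per weight
codebook = kmeans.cluster_centers_.ravel()          # 4 shared fp32 centroids

weights_reconstructed = codebook[codes]             # decompression is a lookup
print(f"codebook: {codebook}")
print(f"MSE after 2-bit clustering: {np.mean((weights - weights_reconstructed) ** 2):.4f}")
```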