SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
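The three techniques named above can be sketched in a few lines of NumPy. This is a minimal illustration, not any paper's implementation: magnitude pruning zeros the smallest weights, low-rank factorization replaces a weight matrix with two thin factors via truncated SVD, and uniform quantization snaps weights to a small set of evenly spaced levels. Function names and the sparsity/rank/bit-width parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_prune(w, sparsity=0.5):
    # Parameter pruning: zero the smallest-magnitude fraction of weights.
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def low_rank_factorize(w, rank):
    # Low-rank factorization: approximate W (m x n) by A (m x r) @ B (r x n),
    # keeping only the top singular values.
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # absorb singular values into the left factor
    b = vt[:rank, :]
    return a, b

def uniform_quantize(w, bits=8):
    # Weight quantization: round to 2**bits evenly spaced levels spanning
    # the weight range, then dequantize back to floats.
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2 ** bits - 1)
    levels = np.round((w - lo) / scale)
    return levels * scale + lo

w = rng.standard_normal((64, 32))
pruned = magnitude_prune(w, sparsity=0.5)   # half the entries become zero
a, b = low_rank_factorize(w, rank=8)        # 64*8 + 8*32 = 768 params vs 2048
wq = uniform_quantize(w, bits=4)            # 16 distinct weight values
```

Each method trades accuracy for size differently: pruning yields sparse matrices (which need sparse storage or hardware support to pay off), factorization shrinks the parameter count directly, and quantization cuts the bits per parameter.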

Papers

Showing 891–900 of 1356 papers

Title | Status | Hype
Compact CNN Structure Learning by Knowledge Distillation | | 0
Augmenting Deep Classifiers with Polynomial Neural Networks | Code | 0
Annealing Knowledge Distillation | Code | 0
Dual Discriminator Adversarial Distillation for Data-free Model Compression | | 0
Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication | | 0
Efficient Personalized Speech Enhancement through Self-Supervised Learning | | 0
Model Compression for Dynamic Forecast Combination | Code | 0
Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation | | 0
Deep Compression for PyTorch Model Deployment on Microcontrollers | Code | 1
Shrinking Bigfoot: Reducing wav2vec 2.0 footprint | | 0
Page 90 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified