
Model Compression

Model compression has been an actively pursued research area in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
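To make the three techniques named in the description concrete, here is a minimal NumPy sketch of magnitude-based parameter pruning, truncated-SVD low-rank factorization, and uniform post-training weight quantization. This is an illustrative sketch, not the method of any paper listed below; the function names, the 90% sparsity / rank-32 / 8-bit settings, and the toy random weight matrix are all assumptions chosen for demonstration.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the smallest-magnitude entries so roughly `sparsity` of them are zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Truncated SVD: weights (m x n) ~= A (m x rank) @ B (rank x n)."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def quantize_uniform(weights: np.ndarray, num_bits: int = 8):
    """Uniform affine quantization to unsigned ints (num_bits <= 8 here),
    returning both the integer codes and the dequantized float approximation."""
    qmax = 2 ** num_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / qmax if w_max > w_min else 1.0
    q = np.clip(np.round((weights - w_min) / scale), 0, qmax).astype(np.uint8)
    return q, q.astype(np.float32) * scale + w_min

# Toy example: compress a random 256x256 weight matrix three different ways.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.9)
a, b = low_rank_factorize(w, rank=32)
q, w_hat = quantize_uniform(w, num_bits=8)

print(f"pruning:      {np.count_nonzero(pruned) / pruned.size:.1%} of weights remain")
print(f"low-rank:     {(a.size + b.size) / w.size:.1%} of original parameter count")
print(f"quantization: max round-trip error {np.abs(w - w_hat).max():.4f}")
```

In practice these operate on trained model layers rather than random matrices, and pruning or quantization is usually followed by fine-tuning to recover accuracy; the sketch only shows the core transformations.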

Papers

Showing 1241–1250 of 1356 papers

Title | Status | Hype
Understanding the Effect of Model Compression on Social Bias in Large Language Models | Code | 0
Few Shot Network Compression via Cross Distillation | Code | 0
LIT: Learned Intermediate Representation Training for Model Compression | Code | 0
Lottery Aware Sparsity Hunting: Enabling Federated Learning on Resource-Limited Edge | Code | 0
Tiny Models are the Computational Saver for Large Models | Code | 0
Finding Deviated Behaviors of the Compressed DNN Models for Image Classifications | Code | 0
Rotation Invariant Quantization for Model Compression | Code | 0
Distilled Pruning: Using Synthetic Data to Win the Lottery | Code | 0
Faithful Label-free Knowledge Distillation | Code | 0
Paraphrasing Complex Network: Network Compression via Factor Transfer | Code | 0
Page 125 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified