
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
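
For concreteness, here is a minimal sketch of two of the methods named above, using PyTorch's built-in pruning utility for magnitude-based parameter pruning and a hand-rolled 8-bit uniform quantizer for weight quantization. The layer shape, sparsity level, and bit width are illustrative assumptions, not values taken from any listed paper.

```python
# Minimal sketch: magnitude pruning + uniform weight quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 256)  # illustrative layer shape

# Parameter pruning: zero out the 50% of weights with the smallest
# L1 magnitude; prune.l1_unstructured installs a binary weight mask.
prune.l1_unstructured(layer, name="weight", amount=0.5)
print(f"sparsity: {(layer.weight == 0).float().mean().item():.2f}")

# Weight quantization: map weights to signed 8-bit integers with a
# single per-tensor scale, then dequantize for use at inference.
w = layer.weight.detach()
scale = w.abs().max() / 127
w_int8 = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
w_dequant = w_int8.float() * scale
print(f"max quantization error: {(w - w_dequant).abs().max().item():.5f}")
```

Low-rank factorization, the third method mentioned, would instead replace the weight matrix with the product of two thin matrices, obtained for example from a truncated SVD.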

Papers

Showing 441–450 of 1356 papers

Title | Status | Hype
Robustness-Guided Image Synthesis for Data-Free Quantization | | 0
Sparse Deep Learning for Time Series Data: Theory and Applications | | 0
ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models | | 0
Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation | | 0
Artemis: HE-Aware Training for Efficient Privacy-Preserving Machine Learning | | 0
Bridging the Gap Between Foundation Models and Heterogeneous Federated Learning | | 0
Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression | | 0
CAIT: Triple-Win Compression towards High Accuracy, Fast Inference, and Favorable Transferability For ViTs | | 0
On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild" | | 0
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks | | 0
Page 45 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
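
DKM in the rows above presumably refers to differentiable k-means clustering of weights, where "2bit-1dim" means each scalar weight is assigned to one of 2² = 4 cluster centers. As a rough illustration of the underlying idea, the sketch below runs plain (non-differentiable) k-means over a weight tensor; the tensor shape, cluster count, and iteration budget are assumptions, and DKM itself additionally makes the assignment step differentiable so clusters are learned jointly with the task loss.

```python
# Rough sketch of k-means weight clustering (the idea behind DKM).
import torch

def kmeans_cluster_weights(w: torch.Tensor, bits: int = 2, iters: int = 10):
    flat = w.flatten()
    k = 2 ** bits  # "2bit" -> 4 scalar clusters
    # Initialize centroids from evenly spaced quantiles of the weights.
    centroids = torch.quantile(flat, torch.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            if (assign == j).any():
                centroids[j] = flat[assign == j].mean()
    return centroids[assign].reshape(w.shape), assign.reshape(w.shape)

w = torch.randn(256, 256)  # illustrative weight tensor
w_clustered, codes = kmeans_cluster_weights(w, bits=2)
print(f"unique values after clustering: {codes.unique().numel()}")
```

After clustering, only the k centroid values and the per-weight cluster indices need to be stored, which is what yields the 2-bit-per-weight storage cost.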