
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
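To make the three techniques named above concrete, here is a minimal NumPy sketch applying each of them to a single dense weight matrix. The matrix size, sparsity level, rank, and bit width are arbitrary placeholders for illustration, not values taken from any paper listed below.

```python
# Minimal sketch of magnitude pruning, low-rank (SVD) factorization,
# and uniform weight quantization on one weight matrix.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # a dense layer's weights

# 1) Parameter pruning: zero out the smallest-magnitude 90% of weights,
#    leaving a sparse matrix that can be stored and multiplied cheaply.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Low-rank factorization: keep the top-k singular values, so W is
#    stored as two thin factors (m*k + k*n values instead of m*n).
k = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * S[:k]   # shape (256, 32)
B = Vt[:k, :]          # shape (32, 256)
W_lowrank = A @ B

# 3) Weight quantization: map float32 weights to int8 with a single
#    scale factor, then dequantize at inference time.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_q.astype(np.float32) * scale

for name, approx in [("pruned", W_pruned), ("low-rank", W_lowrank), ("int8", W_dequant)]:
    err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    print(f"{name}: relative reconstruction error = {err:.3f}")
```

In practice each method trades reconstruction error against storage and compute savings differently, which is why they are often combined and followed by fine-tuning to recover accuracy.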

Papers

Showing 341–350 of 1356 papers

Title | Status | Hype
SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression | Code | 3
Maxwell's Demon at Work: Efficient Pruning by Leveraging Saturation of Neurons | | 0
Enhanced Sparsification via Stimulative Training | | 0
Bit-mask Robust Contrastive Knowledge Distillation for Unsupervised Semantic Hashing | Code | 1
Optimal Policy Sparsification and Low Rank Decomposition for Deep Reinforcement Learning | | 0
Towards efficient deep autoencoders for multivariate time series anomaly detection | | 0
DyCE: Dynamically Configurable Exiting for Deep Learning Compression and Real-time Scaling | Code | 0
"Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach | Code | 1
Differentially Private Knowledge Distillation via Synthetic Text Generation | Code | 0
PromptMM: Multi-Modal Knowledge Distillation for Recommendation with Prompt-Tuning | Code | 2
Page 35 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
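The "2bit-1dim" entries above refer to DKM-style weight clustering, where each scalar weight is assigned to one of 2^2 = 4 shared codebook values. The sketch below is a hedged illustration of that storage scheme using plain (hard) 1-D k-means; the actual DKM method makes the cluster assignment differentiable and learns it during training, which this sketch does not attempt. All sizes and iteration counts are placeholders.

```python
# Hard 1-D k-means weight clustering: each weight is replaced by its
# nearest centroid, so only 2-bit indices plus a tiny codebook are stored.
import numpy as np

def kmeans_1d(weights, n_clusters=4, n_iters=50):
    """Plain Lloyd's k-means on a flat array of weights."""
    # Initialize centroids at evenly spaced quantiles of the weights.
    centroids = np.quantile(weights, np.linspace(0.0, 1.0, n_clusters))
    for _ in range(n_iters):
        # Assign each weight to its nearest centroid.
        assign = np.argmin(np.abs(weights[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned weights.
        for c in range(n_clusters):
            members = weights[assign == c]
            if members.size:
                centroids[c] = members.mean()
    return centroids, assign

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)  # stand-in for a layer's weights

centroids, assign = kmeans_1d(w, n_clusters=4)  # 4 clusters = 2 bits per weight
w_compressed = centroids[assign]                # decode via codebook lookup

print("codebook:", np.round(centroids, 3))
print("relative error:", np.linalg.norm(w - w_compressed) / np.linalg.norm(w))
```

The accuracy gap between the 2-bit and 1-bit rows reflects how aggressively the codebook shrinks: halving the bit width halves the number of representable weight values.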