
Model Compression

Model compression has been an active area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
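The three techniques named in the description can each be illustrated in a few lines. Below is a minimal PyTorch sketch, not taken from any paper listed on this page; the layer size, pruning threshold, and rank are illustrative assumptions.

```python
# Minimal sketch of three common model-compression techniques:
# parameter pruning, low-rank factorization, and weight quantization.
# Layer size, pruning threshold, and rank are illustrative assumptions.
import torch
import torch.nn as nn

layer = nn.Linear(512, 512)
W = layer.weight.data  # dense (512, 512) weight matrix

# 1. Parameter pruning: zero out the weights with the smallest magnitudes.
threshold = W.abs().quantile(0.9)  # keep only the top 10% by magnitude
pruned = torch.where(W.abs() >= threshold, W, torch.zeros_like(W))
sparsity = (pruned == 0).float().mean()

# 2. Low-rank factorization: approximate W with a rank-r product U @ V,
#    cutting storage from d*d to 2*d*r parameters.
r = 64
U_full, S, Vh = torch.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * S[:r]  # fold singular values into U
V = Vh[:r, :]
low_rank = U @ V           # rank-64 approximation of W

# 3. Weight quantization: map float32 weights to 8-bit integers with a
#    per-tensor scale, then dequantize for use at inference time.
scale = W.abs().max() / 127
q = torch.clamp((W / scale).round(), -128, 127).to(torch.int8)
dequantized = q.float() * scale

print(f"sparsity after pruning: {sparsity:.2%}")
print(f"rank-{r} relative error: {(W - low_rank).norm() / W.norm():.3f}")
print(f"int8 relative error: {(W - dequantized).norm() / W.norm():.3f}")
```

In practice, pruned and factorized models are typically fine-tuned afterwards to recover the accuracy lost to the approximation.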

Papers

Showing 221-230 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Smooth Model Compression without Fine-Tuning | | 0 |
| INSIGHT: A Survey of In-Network Systems for Intelligent, High-Efficiency AI and Topology Optimization | | 0 |
| FLAT-LLM: Fine-grained Low-rank Activation Space Transformation for Large Language Model Compression | Code | 0 |
| Effective and Efficient One-pass Compression of Speech Foundation Models Using Sparsity-aware Self-pinching Gates | | 0 |
| ResSVD: Residual Compensated SVD for Large Language Model Compression | | 0 |
| Small Language Models: Architectures, Techniques, Evaluation, Problems and Future Adaptation | | 0 |
| Efficient Speech Translation through Model Compression and Knowledge Distillation | Code | 0 |
| Tensorization is a powerful but underexplored tool for compression and interpretability of neural networks | | 0 |
| Pangu Light: Weight Re-Initialization for Pruning and Accelerating LLMs | | 0 |
| Making deep neural networks work for medical audio: representation, compression and domain adaptation | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |