
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
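
As a concrete illustration of two of the methods named above, the sketch below applies magnitude-based parameter pruning and post-training dynamic weight quantization to a toy PyTorch model. It is a minimal example under assumed defaults (a recent PyTorch install, an arbitrary small network), not the method of any particular paper listed on this page.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small model standing in for a network to be compressed.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Parameter pruning: zero out the 50% smallest-magnitude weights per layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the tensor

# Weight quantization: store Linear weights as int8, dequantize at inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

In practice pruning is usually followed by fine-tuning to recover accuracy; the 50% sparsity level here is an arbitrary choice for illustration.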

Papers

Showing 821–830 of 1356 papers

| Title | Status | Hype |
|-------|--------|------|
| Training Thinner and Deeper Neural Networks: Jumpstart Regularization | Code | 0 |
| AutoMC: Automated Model Compression based on Domain Knowledge and Progressive search strategy | Code | 0 |
| Enabling Deep Learning on Edge Devices through Filter Pruning and Knowledge Transfer | | 0 |
| Can Model Compression Improve NLP Fairness | | 0 |
| AutoDistill: an End-to-End Framework to Explore and Distill Hardware-Efficient Language Models | | 0 |
| High-fidelity 3D Model Compression based on Key Spheres | Code | 0 |
| PCEE-BERT: Accelerating BERT Inference via Patient and Confident Early Exiting | | 0 |
| UDC: Unified DNAS for Compressible TinyML Models | | 0 |
| DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale | Code | 0 |
| ThreshNet: An Efficient DenseNet Using Threshold Mechanism to Reduce Connections | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |
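
For context on the "Nbit-1dim" naming in the entries above: DKM-style compression clusters individual (scalar, hence "1dim") weights into 2^N shared centroids, so a 2-bit model keeps only four distinct weight values per tensor. The sketch below shows plain hard k-means weight sharing as a rough stand-in; DKM itself uses a differentiable soft assignment trained end to end, and the function name here is our own invention.

```python
import torch

def kmeans_weight_share(weight: torch.Tensor, bits: int = 2, iters: int = 20):
    """Quantize a weight tensor to 2**bits shared scalar centroids.

    Plain hard k-means for illustration only; DKM replaces the hard
    assignment with a differentiable one so clustering is trainable.
    """
    flat = weight.flatten()
    k = 2 ** bits
    # Initialize centroids evenly across the weight range.
    centroids = torch.linspace(float(flat.min()), float(flat.max()), k)
    for _ in range(iters):
        # Hard assignment: nearest centroid for each weight.
        assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        # Update each centroid to the mean of its assigned weights.
        for j in range(k):
            mask = assign == j
            if mask.any():
                centroids[j] = flat[mask].mean()
    return centroids[assign].reshape(weight.shape)

w = torch.randn(128, 128)
w_q = kmeans_weight_share(w, bits=2)  # 4 centroids -> 2-bit codes per weight
```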