Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
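Of the technique families named above, unstructured magnitude pruning is the simplest to illustrate: weights whose magnitude falls below a threshold are set to zero, and the resulting sparse model can be stored or executed more cheaply. The NumPy sketch below is a minimal, generic illustration of that idea; the layer shape, the 80% sparsity target, and the magnitude_prune helper are assumptions made for the example and are not taken from any paper listed on this page.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                 # number of weights to zero out
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) > threshold, weights, 0.0)

# Illustrative usage: prune 80% of a hypothetical 256x128 dense layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_pruned = magnitude_prune(W, sparsity=0.8)
print(f"achieved sparsity: {np.mean(W_pruned == 0):.2f}")
```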

Papers

Showing 1001–1025 of 1356 papers

Title | Status | Hype
Privacy-Preserving SAM Quantization for Efficient Edge Intelligence in Healthcare | | 0
Private Model Compression via Knowledge Distillation | | 0
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models | | 0
Inferring ECG from PPG for Continuous Cardiac Monitoring Using Lightweight Neural Network | | 0
Progressive Weight Pruning of Deep Neural Networks using ADMM | | 0
Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher | | 0
ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks | | 0
“Learning-Compression” Algorithms for Neural Net Pruning | | 0
Prototype-based Personalized Pruning | | 0
Prototypical Contrastive Predictive Coding | | 0
Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks | | 0
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization | | 0
Structured Pruning of a BERT-based Question Answering Model | | 0
Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey | | 0
Pruning at a Glance: A Structured Class-Blind Pruning Technique for Model Compression | | 0
Pruning at a Glance: Global Neural Pruning for Model Compression | | 0
What is Left After Distillation? How Knowledge Transfer Impacts Fairness and Bias | | 0
A Half-Space Stochastic Projected Gradient Method for Group Sparsity Regularization | | 0
What is Lost in Knowledge Distillation? | | 0
Pruning Large Language Models via Accuracy Predictor | | 0
Aggressive Post-Training Compression on Extremely Large Language Models | | 0
Pruning Ternary Quantization | | 0
AfroXLMR-Comet: Multilingual Knowledge Distillation with Attention Matching for Low-Resource Languages | | 0
AACP: Model Compression by Accurate and Automatic Channel Pruning | | 0
A flexible, extensible software framework for model compression based on the LC algorithm | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified