
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
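To make the first of these techniques concrete, the sketch below applies unstructured magnitude pruning to a weight matrix: the smallest-magnitude weights are zeroed until a target sparsity is reached. The `magnitude_prune` helper and the NumPy-only setup are illustrative assumptions, not an implementation from any of the listed papers; practical pipelines typically prune iteratively and fine-tune between pruning steps.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until roughly `sparsity` of them are zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                  # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # keep only larger-magnitude weights
    return weights * mask

# Toy usage: prune 75% of a random 4x4 weight matrix (hypothetical example data).
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
w_pruned = magnitude_prune(w, sparsity=0.75)
print(f"fraction zeroed: {np.mean(w_pruned == 0):.0%}")
```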

Papers

Showing 1001–1050 of 1356 papers

Title | Status | Hype
Privacy-Preserving SAM Quantization for Efficient Edge Intelligence in Healthcare |  | 0
Private Model Compression via Knowledge Distillation |  | 0
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models |  | 0
Inferring ECG from PPG for Continuous Cardiac Monitoring Using Lightweight Neural Network |  | 0
Progressive Weight Pruning of Deep Neural Networks using ADMM |  | 0
Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher |  | 0
ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks |  | 0
“Learning-Compression” Algorithms for Neural Net Pruning |  | 0
Prototype-based Personalized Pruning |  | 0
Prototypical Contrastive Predictive Coding |  | 0
Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks |  | 0
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization |  | 0
Structured Pruning of a BERT-based Question Answering Model |  | 0
Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey |  | 0
Pruning at a Glance: A Structured Class-Blind Pruning Technique for Model Compression |  | 0
Pruning at a Glance: Global Neural Pruning for Model Compression |  | 0
What is Left After Distillation? How Knowledge Transfer Impacts Fairness and Bias |  | 0
A Half-Space Stochastic Projected Gradient Method for Group Sparsity Regularization |  | 0
What is Lost in Knowledge Distillation? |  | 0
Pruning Large Language Models via Accuracy Predictor |  | 0
Aggressive Post-Training Compression on Extremely Large Language Models |  | 0
Pruning Ternary Quantization |  | 0
AfroXLMR-Comet: Multilingual Knowledge Distillation with Attention Matching for Low-Resource languages |  | 0
AACP: Model Compression by Accurate and Automatic Channel Pruning |  | 0
A flexible, extensible software framework for model compression based on the LC algorithm |  | 0
Puppet-CNN: Input-Adaptive Convolutional Neural Networks with Model Compression using Ordinary Differential Equation |  | 0
PURSUhInT: In Search of Informative Hint Points Based on Layer Clustering for Knowledge Distillation |  | 0
Aerial Image Classification in Scarce and Unconstrained Environments via Conformal Prediction |  | 0
QAPPA: Quantization-Aware Power, Performance, and Area Modeling of DNN Accelerators |  | 0
QD-BEV : Quantization-aware View-guided Distillation for Multi-view 3D Object Detection |  | 0
Adversarially Robust and Explainable Model Compression with On-Device Personalization for Text Classification |  | 0
NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration |  | 0
Q-MambaIR: Accurate Quantized Mamba for Efficient Image Restoration |  | 0
Adversarial Attacks on Machine Learning in Embedded and IoT Platforms |  | 0
QTI Submission to DCASE 2021: residual normalization for device-imbalanced acoustic scene classification with efficient design |  | 0
T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit |  | 0
Quantizing YOLOv7: A Comprehensive Study |  | 0
Quantum Neural Network Compression |  | 0
Advancing IIoT with Over-the-Air Federated Learning: The Role of Iterative Magnitude Pruning |  | 0
QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures |  | 0
QUIDAM: A Framework for Quantization-Aware DNN Accelerator and Model Co-Exploration |  | 0
Quiver neural networks |  | 0
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning |  | 0
R2 Loss: Range Restriction Loss for Model Compression and Quantization |  | 0
RADIN: Souping on a Budget |  | 0
Radio: Rate-Distortion Optimization for Large Language Model Compression |  | 0
Random Conditioning for Diffusion Model Compression with Distillation |  | 0
Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression |  | 0
Random Offset Block Embedding Array (ROBE) for CriteoTB Benchmark MLPerf DLRM Model : 1000× Compression and 3.1× Faster Inference |  | 0
RAND: Robustness Aware Norm Decay For Quantized Seq2seq Models |  | 0
Page 21 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified
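The DKM entries above compress MobileBERT by clustering its weights into a small shared codebook; a "2bit-1dim" configuration maps each scalar weight to one of 2^2 = 4 shared centroids. The sketch below illustrates that codebook idea with plain hard k-means on a weight tensor. It is a simplified assumption for illustration (the `kmeans_quantize` helper is hypothetical), not the DKM procedure itself, which learns the clustering differentiably alongside the task loss.

```python
import numpy as np

def kmeans_quantize(weights: np.ndarray, bits: int = 2, iters: int = 20) -> np.ndarray:
    """Snap each scalar weight to one of 2**bits shared centroids (a 1-dim codebook)."""
    flat = weights.ravel()
    k = 2 ** bits
    # Spread the initial centroids evenly over the observed weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Hard assignment: nearest centroid for every weight.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            members = flat[assign == c]
            if members.size:
                centroids[c] = members.mean()  # recompute cluster centers
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    # Only the 2-bit assignments and the k centroids would need to be stored.
    return centroids[assign].reshape(weights.shape)

# Toy usage on a random weight matrix (hypothetical example data).
w = np.random.default_rng(0).standard_normal((128, 64))
w_q = kmeans_quantize(w, bits=2)
print("unique values after 2-bit clustering:", np.unique(w_q).size)  # expect <= 4
```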