
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
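
A minimal sketch of the three techniques named above, applied to a single weight matrix. This assumes PyTorch and is purely illustrative; the sparsity level, rank, and bit-width are arbitrary choices, not settings taken from any paper listed here.

import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    # Parameter pruning: zero out the smallest-magnitude entries so that
    # roughly `sparsity` fraction of the matrix becomes zero.
    threshold = torch.quantile(weight.abs().flatten(), sparsity)
    return weight * (weight.abs() > threshold)

def low_rank_factorize(weight: torch.Tensor, rank: int) -> torch.Tensor:
    # Low-rank factorization: keep only the top singular components,
    # so the matrix can be stored as two thin factors.
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vh[:rank]

def uniform_quantize(weight: torch.Tensor, bits: int = 8) -> torch.Tensor:
    # Weight quantization: snap weights onto a symmetric 2^bits-level grid.
    qmax = 2 ** (bits - 1) - 1
    scale = weight.abs().max() / qmax
    return torch.round(weight / scale).clamp(-qmax, qmax) * scale

w = torch.randn(256, 256)
print((magnitude_prune(w, 0.9) == 0).float().mean().item())        # ~0.9 sparsity
print(torch.linalg.matrix_rank(low_rank_factorize(w, 16)).item())  # rank 16
print(uniform_quantize(w, bits=8).unique().numel())                # at most 255 distinct values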

Papers

Showing 551–600 of 1356 papers

Title | Status | Hype
Adaptive Neural Connections for Sparsity Learning | | 0
Can Students Outperform Teachers in Knowledge Distillation based Model Compression? | | 0
Dynamic Probabilistic Pruning: Training sparse networks based on stochastic and dynamic masking | | 0
Dynamic Model Pruning with Feedback | | 0
Can Students Beyond The Teacher? Distilling Knowledge from Teacher's Bias | | 0
A "Network Pruning Network" Approach to Deep Model Compression | | 0
Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices | | 0
Can Model Compression Improve NLP Fairness | | 0
An Empirical Study of Low Precision Quantization for TinyML | | 0
Heterogeneous Federated Learning using Dynamic Model Pruning and Adaptive Gradient | | 0
Accelerating deep neural networks for efficient scene understanding in automotive cyber-physical systems | | 0
Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization | | 0
Can collaborative learning be private, robust and scalable? | | 0
Dual Discriminator Adversarial Distillation for Data-free Model Compression | | 0
CAIT: Triple-Win Compression towards High Accuracy, Fast Inference, and Favorable Transferability For ViTs | | 0
Stochastic Model Pruning via Weight Dropping Away and Back | | 0
Dreaming To Prune Image Deraining Networks | | 0
Multihop: Leveraging Complex Models to Learn Accurate Simple Models | | 0
Dream Distillation: A Data-Independent Model Compression Framework | | 0
Bringing AI To Edge: From Deep Learning's Perspective | | 0
An Empirical Investigation of Matrix Factorization Methods for Pre-trained Transformers | | 0
Adapting Models to Signal Degradation using Distillation | | 0
Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network | | 0
Don't encrypt the data; just approximate the model \ Towards Secure Transaction and Fair Pricing of Training Data | | 0
BRIEDGE: EEG-Adaptive Edge AI for Multi-Brain to Multi-Robot Interaction | | 0
Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance | | 0
Domain Generalization on Efficient Acoustic Scene Classification using Residual Normalization | | 0
Bridging the Resource Gap: Deploying Advanced Imitation Learning Models onto Affordable Embedded Platforms | | 0
A Multi-objective Complex Network Pruning Framework Based on Divide-and-conquer and Global Performance Impairment Ranking | | 0
Domain Adaptation Regularization for Spectral Pruning | | 0
Does Learning Require Memorization? A Short Tale about a Long Tail | | 0
DNN Model Compression Under Accuracy Constraints | | 0
DNA data storage, sequencing data-carrying DNA | | 0
Bridging the Gap Between Foundation Models and Heterogeneous Federated Learning | | 0
An Embedded Deep Learning Object Detection Model For Traffic In Asian Countries | | 0
AdapMTL: Adaptive Pruning Framework for Multitask Learning Model | | 0
DMT: Comprehensive Distillation with Multiple Self-supervised Teachers | | 0
DLIP: Distilling Language-Image Pre-training | | 0
Boosting Graph Neural Networks via Adaptive Knowledge Distillation | | 0
DKM: Differentiable K-Means Clustering Layer for Neural Network Compression | | 0
Divergent Token Metrics: Measuring degradation to prune away LLM components -- and optimize quantization | | 0
Block-wise Intermediate Representation Training for Model Compression | | 0
Distributed Low Precision Training Without Mixed Precision | | 0
Distilling with Performance Enhanced Students | | 0
Block Skim Transformer for Efficient Question Answering | | 0
Distilling Spikes: Knowledge Distillation in Spiking Neural Networks | | 0
Blending LSTMs into CNNs | | 0
An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs | | 0
Distilling Optimal Neural Networks: Rapid Search in Diverse Spaces | | 0
BioNetExplorer: Architecture-Space Exploration of Bio-Signal Processing Deep Neural Networks for Wearables | | 0
Page 12 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
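
For context on the "2bit-1dim" entries: DKM compresses weights by clustering scalar ("1dim") weight values into 2^bits shared centroids with a differentiable k-means layer. The sketch below uses ordinary hard k-means purely to illustrate what a 2-bit (4-centroid) codebook looks like; it is a simplified assumption-based illustration, not the DKM training procedure or the setup behind the numbers above.

import torch

def kmeans_codebook(weight: torch.Tensor, bits: int = 2, iters: int = 20):
    # Cluster scalar weights into 2^bits shared values (codebook + indices).
    k = 2 ** bits                                            # 2-bit -> 4 centroids
    flat = weight.flatten()
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    for _ in range(iters):
        assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        for c in range(k):
            members = flat[assign == c]
            if members.numel() > 0:
                centroids[c] = members.mean()                # move centroid to cluster mean
    assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
    return centroids, assign.reshape(weight.shape)

w = torch.randn(128, 128)
codebook, idx = kmeans_codebook(w, bits=2)   # 4 shared values, 2-bit indices per weight
w_hat = codebook[idx]                        # "decompressed" weights
print(codebook.tolist(), (w - w_hat).abs().mean().item())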