
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
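
As a concrete illustration of the three techniques named in the description above, here is a minimal NumPy sketch of magnitude-based parameter pruning, truncated-SVD low-rank factorization, and symmetric uniform weight quantization applied to a single weight matrix. The function names and the example hyperparameters (sparsity, rank, bit width) are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the smallest-magnitude entries so roughly `sparsity` of them are zero."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Approximate w as a @ b via truncated SVD; store the two thin factors."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (m, rank), singular values folded in
    b = vt[:rank, :]             # (rank, n)
    return a, b

def uniform_quantize(w: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Symmetric uniform quantization to 2**num_bits - 1 integer levels."""
    scale = np.abs(w).max() / (2 ** (num_bits - 1) - 1)
    return np.round(w / scale) * scale  # de-quantized back to float for comparison

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.9)   # illustrative sparsity target
a, b = low_rank_factorize(w, rank=32)         # illustrative rank budget
w_quant = uniform_quantize(w, num_bits=4)     # illustrative bit width

print(f"pruned sparsity: {(w_pruned == 0).mean():.1%}")            # ~90.0%
print(f"factorized params: {a.size + b.size} vs dense {w.size}")   # 32768 vs 262144
print(f"distinct 4-bit weight values: {np.unique(w_quant).size}")  # at most 15
```

In practice these operations are applied per layer and are usually followed by fine-tuning (or, as in several papers below, knowledge distillation) to recover the accuracy lost to compression.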

Papers

Showing 501–525 of 1356 papers

Title | Status | Hype
Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation | Code | 0
Dense Vision Transformer Compression with Few Samples | | 0
Are Compressed Language Models Less Subgroup Robust? | Code | 0
Tiny Models are the Computational Saver for Large Models | Code | 0
Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN | | 0
Magic for the Age of Quantized DNNs | | 0
Advancing IIoT with Over-the-Air Federated Learning: The Role of Iterative Magnitude Pruning | | 0
DiPaCo: Distributed Path Composition | | 0
BRIEDGE: EEG-Adaptive Edge AI for Multi-Brain to Multi-Robot Interaction | | 0
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency | Code | 0
Maxwell's Demon at Work: Efficient Pruning by Leveraging Saturation of Neurons | | 0
Enhanced Sparsification via Stimulative Training | | 0
Optimal Policy Sparsification and Low Rank Decomposition for Deep Reinforcement Learning | | 0
DyCE: Dynamically Configurable Exiting for Deep Learning Compression and Real-time Scaling | Code | 0
Towards efficient deep autoencoders for multivariate time series anomaly detection | | 0
Differentially Private Knowledge Distillation via Synthetic Text Generation | Code | 0
Model Compression Method for S4 with Diagonal State Space Layers using Balanced Truncation | | 0
FinGPT-HPC: Efficient Pretraining and Finetuning Large Language Models for Financial Applications with High-Performance Computing | | 0
From Cloud to Edge: Rethinking Generative AI for Low-Resource Design Challenges | | 0
Towards a tailored mixed-precision sub-8-bit quantization scheme for Gated Recurrent Units using Genetic Algorithms | | 0
Extraction of nonlinearity in neural networks with Koopman operator | | 0
Model Compression and Efficient Inference for Large Language Models: A Survey | | 0
Bayesian Deep Learning Via Expectation Maximization and Turbo Deep Approximate Message Passing | | 0
Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy | | 0
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models | | 0
Page 21 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified