SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
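To make two of the techniques named above concrete, here is a minimal pure-Python sketch of magnitude pruning (zero out the smallest-magnitude weights) and uniform affine quantization (map floats onto an integer grid). The function names and the flat-list representation are illustrative assumptions, not APIs from any paper listed below; real implementations operate on tensors.

```python
def magnitude_prune(weights, sparsity):
    """Zero out roughly the fraction `sparsity` of weights with smallest |w|."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_uniform(weights, num_bits=8):
    """Uniform affine quantization: return (int codes, scale, offset)."""
    qmax = 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate floats from integer codes."""
    return [c * scale + lo for c in codes]
```

Pruning a toy layer with `magnitude_prune(w, 0.5)` zeros half the entries; round-tripping through `quantize_uniform`/`dequantize` reproduces each weight to within half a quantization step, which is the usual error bound for uniform quantization.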

Papers

Showing 501–550 of 1356 papers

Title | Status | Hype
Explainability-Driven Leaf Disease Classification Using Adversarial Training and Knowledge Distillation | | 0
Explaining Sequence-Level Knowledge Distillation as Data-Augmentation for Neural Machine Translation | | 0
Comprehensive Survey of Model Compression and Speed up for Vision Transformers | | 0
Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization | | 0
Are We There Yet? A Measurement Study of Efficiency for LLM Applications on Mobile Devices | | 0
Exploiting Non-Linear Redundancy for Neural Model Compression | | 0
GeneCAI: Genetic Evolution for Acquiring Compact AI | | 0
Exploration and Estimation for Model Compression | | 0
GQSA: Group Quantization and Sparsity for Accelerating Large Language Model Inference | | 0
Artemis: HE-Aware Training for Efficient Privacy-Preserving Machine Learning | | 0
Data-Driven Compression of Convolutional Neural Networks | | 0
A Unified Knowledge Distillation Framework for Deep Directed Graphical Models | | 0
DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer | | 0
DARC: Differentiable ARchitecture Compression | | 0
A Unified Framework of DNN Weight Pruning and Weight Clustering/Quantization Using ADMM | | 0
Aligned Weight Regularizers for Pruning Pretrained Neural Networks | | 0
DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks | | 0
D^2MoE: Dual Routing and Dynamic Scheduling for Efficient On-Device MoE-based LLM Serving | | 0
A Unified Approximation Framework for Compressing and Accelerating Deep Neural Networks | | 0
CURing Large Models: Compression via CUR Decomposition | | 0
CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness | | 0
Augmenting Knowledge Distillation With Peer-To-Peer Mutual Learning For Model Compression | | 0
Artificial Neural Networks for Photonic Applications: From Algorithms to Implementation | | 0
CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression | | 0
Deep Face Recognition Model Compression via Knowledge Transfer and Distillation | | 0
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs | | 0
Cross Domain Model Compression by Structurally Weight Sharing | | 0
Inferring ECG from PPG for Continuous Cardiac Monitoring Using Lightweight Neural Network | | 0
From Word Vectors to Multimodal Embeddings: Techniques, Applications, and Future Directions For Large Language Models | | 0
Cross-Channel Intragroup Sparsity Neural Network | | 0
Croesus: Multi-Stage Processing and Transactions for Video-Analytics in Edge-Cloud Systems | | 0
Attention Sinks and Outlier Features: A 'Catch, Tag, and Release' Mechanism for Embeddings | | 0
Creating Lightweight Object Detectors with Model Compression for Deployment on Edge Devices | | 0
CPTQuant -- A Novel Mixed Precision Post-Training Quantization Techniques for Large Language Models | | 0
ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks | | 0
AACP: Model Compression by Accurate and Automatic Channel Pruning | | 0
Frustratingly Easy Model Ensemble for Abstractive Summarization | | 0
FSCNN: A Fast Sparse Convolution Neural Network Inference System | | 0
Atrial Fibrillation Detection Using Weight-Pruned, Log-Quantised Convolutional Neural Networks | | 0
“Learning-Compression” Algorithms for Neural Net Pruning | | 0
Integrating Fairness and Model Pruning Through Bi-level Optimization | | 0
CoSurfGS: Collaborative 3D Surface Gaussian Splatting with Distributed Learning for Large Scene Reconstruction | | 0
Atomic Compression Networks | | 0
Fragile Mastery: Are Domain-Specific Trade-Offs Undermining On-Device Language Models? | | 0
Atleus: Accelerating Transformers on the Edge Enabled by 3D Heterogeneous Manycore Architectures | | 0
Cosine Similarity Knowledge Distillation for Individual Class Information Transfer | | 0
Spike-and-slab shrinkage priors for structurally sparse Bayesian neural networks | | 0
CORSD: Class-Oriented Relational Self Distillation | | 0
A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation | | 0
A Half-Space Stochastic Projected Gradient Method for Group Sparsity Regularization | | 0
Page 11 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified