SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
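
Below is a minimal NumPy sketch of the three techniques the description names: magnitude pruning, truncated-SVD low-rank factorization, and symmetric uniform quantization. All function names and parameters here are illustrative, not taken from any cited paper, and real pipelines usually fine-tune the network after each step to recover accuracy.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Parameter pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def low_rank_factorize(w: np.ndarray, rank: int = 32):
    """Low-rank factorization: approximate w with two thin matrices via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]  # a @ b approximates w

def uniform_quantize(w: np.ndarray, bits: int = 8) -> np.ndarray:
    """Weight quantization: round to a symmetric 2**bits-level grid (zero-preserving)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

# Toy weight matrix standing in for one dense layer of a trained network.
w = np.random.randn(256, 256).astype(np.float32)
a, b = low_rank_factorize(w, rank=32)     # 256*256 -> 2 * (256*32) parameters
w_pruned = magnitude_prune(w)             # 90% of entries set to zero
w_quant = uniform_quantize(w_pruned)      # survivors snapped to an 8-bit grid
print("nonzero fraction:", np.count_nonzero(w_quant) / w_quant.size)  # ~0.10
```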

Papers

Showing 351–400 of 1356 papers

Title | Status | Hype
DeepTwist: Learning Model Compression via Occasional Weight Distortion |  | 0
DeGAN : Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier |  | 0
Delving Deep into Semantic Relation Distillation |  | 0
Densely Distilling Cumulative Knowledge for Continual Learning |  | 0
Deep Model Compression Via Two-Stage Deep Reinforcement Learning |  | 0
Deep Model Compression: Distilling Knowledge from Noisy Teachers |  | 0
Deep Model Compression based on the Training History |  | 0
Deploying Foundation Model Powered Agent Services: A Survey |  | 0
A Web-Based Solution for Federated Learning with LLM-Based Automation |  | 0
Design and Prototyping Distributed CNN Inference Acceleration in Edge Computing |  | 0
Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey |  | 0
Developing Far-Field Speaker System Via Teacher-Student Learning |  | 0
Differentiable Architecture Compression |  | 0
Differentiable Feature Aggregation Search for Knowledge Distillation |  | 0
Differentiable Mask for Pruning Convolutional and Recurrent Networks |  | 0
Edge-First Language Model Inference: Models, Metrics, and Tradeoffs |  | 0
Differentiable Network Pruning for Microcontrollers |  | 0
Differentiable Sparsification for Deep Neural Networks |  | 0
AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates |  | 0
Deep learning model compression using network sensitivity and gradients |  | 0
Differential Privacy Meets Federated Learning under Communication Constraints |  | 0
EDCompress: Energy-Aware Model Compression for Dataflows |  | 0
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey |  | 0
DiPaCo: Distributed Path Composition |  | 0
DipSVD: Dual-importance Protected SVD for Efficient LLM Compression |  | 0
AMD: Adaptive Masked Distillation for Object Detection |  | 0
Discrete Model Compression With Resource Constraint for Deep Neural Networks |  | 0
DEEPEYE: A Compact and Accurate Video Comprehension at Terminal Devices Compressed with Quantization and Tensorization |  | 0
Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks |  | 0
Edge AI: Evaluation of Model Compression Techniques for Convolutional Neural Networks |  | 0
Automatic Mixed-Precision Quantization Search of BERT |  | 0
An Effective Information Theoretic Framework for Channel Pruning |  | 0
Deep Compression of Neural Networks for Fault Detection on Tennessee Eastman Chemical Processes |  | 0
AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting |  | 0
Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression |  | 0
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration |  | 0
BioNetExplorer: Architecture-Space Exploration of Bio-Signal Processing Deep Neural Networks for Wearables |  | 0
Deep Collective Knowledge Distillation |  | 0
MobiSR: Efficient On-Device Super-Resolution through Heterogeneous Mobile Processors |  | 0
Edge-AI for Agriculture: Lightweight Vision Models for Disease Detection in Resource-Limited Settings |  | 0
Edge-MultiAI: Multi-Tenancy of Latency-Sensitive Deep Learning Applications on Edge |  | 0
Distilling with Performance Enhanced Students |  | 0
Distributed Low Precision Training Without Mixed Precision |  | 0
Divergent Token Metrics: Measuring degradation to prune away LLM components -- and optimize quantization |  | 0
DKM: Differentiable K-Means Clustering Layer for Neural Network Compression |  | 0
DLIP: Distilling Language-Image Pre-training |  | 0
DMT: Comprehensive Distillation with Multiple Self-supervised Teachers |  | 0
DNA data storage, sequencing data-carrying DNA |  | 0
Decoupling Weight Regularization from Batch Size for Model Compression |  | 0
Page 8 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified
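
In the table above, "2bit-1dim" denotes codebook quantization of scalar weights: each weight is mapped to one of 2^2 = 4 shared centroids, so a layer stores 2-bit codes plus a tiny float codebook. DKM learns this clustering differentiably during training; the sketch below uses plain hard k-means only to illustrate the compression step, and `kmeans_quantize` is a hypothetical helper, not the paper's API.

```python
import numpy as np

def kmeans_quantize(w: np.ndarray, bits: int = 2, iters: int = 20):
    """Cluster scalar weights into 2**bits centroids (hard k-means).

    DKM itself uses a differentiable, attention-like soft assignment while
    training; this hard version only shows the resulting codebook encoding.
    """
    flat = w.reshape(-1)
    k = 2 ** bits                                  # "2bit-1dim" -> 4 scalar centroids
    # Spread initial centroids over the weight distribution via quantiles.
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        codes = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = flat[codes == j]
            if members.size:
                centroids[j] = members.mean()
    # A compressed layer stores the 2-bit codes plus the k-entry codebook.
    return centroids[codes].reshape(w.shape), codes, centroids

w = np.random.randn(128, 128).astype(np.float32)
w_hat, codes, codebook = kmeans_quantize(w, bits=2)
print("codebook:", codebook, "reconstruction MSE:", float(np.mean((w - w_hat) ** 2)))
```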