
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
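To make two of the techniques named in the description concrete, below is a minimal NumPy sketch of unstructured magnitude pruning and uniform weight quantization applied to a random weight matrix. The function names and parameters are illustrative assumptions, not the method of any paper listed on this page.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def uniform_quantize(weights: np.ndarray, num_bits: int = 8):
    """Linearly quantize weights to num_bits signed integers; return codes and scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    codes = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale  # store 1 byte per weight plus a single float scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256)).astype(np.float32)

    pruned = magnitude_prune(w, sparsity=0.9)        # ~90% of entries become zero
    codes, scale = uniform_quantize(pruned)          # 4x smaller than float32 storage
    reconstructed = codes.astype(np.float32) * scale

    print("nonzero fraction:", np.count_nonzero(pruned) / pruned.size)
    print("max reconstruction error:", np.max(np.abs(pruned - reconstructed)))
```

In practice the pruned and quantized weights are fine-tuned afterwards to recover accuracy; the sketch only shows the compression step itself.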

Papers

Showing 1251–1275 of 1356 papers

Title | Status | Hype
Decoupling Weight Regularization from Batch Size for Model Compression |  | 0
Deep Collective Knowledge Distillation |  | 0
Channel Compression: Rethinking Information Redundancy among Channels in CNN Architecture |  | 0
Deep Compression of Neural Networks for Fault Detection on Tennessee Eastman Chemical Processes |  | 0
10K is Enough: An Ultra-Lightweight Binarized Network for Infrared Small-Target Detection |  | 0
DEEPEYE: A Compact and Accurate Video Comprehension at Terminal Devices Compressed with Quantization and Tensorization |  | 0
You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning |  | 0
Deep learning model compression using network sensitivity and gradients |  | 0
Strategic Fusion Optimizes Transformer Compression |  | 0
Deep Model Compression based on the Training History |  | 0
Deep Model Compression: Distilling Knowledge from Noisy Teachers |  | 0
Deep Model Compression Via Two-Stage Deep Reinforcement Learning |  | 0
Neural Epitome Search for Architecture-Agnostic Network Compression |  | 0
Streamlining Tensor and Network Pruning in PyTorch |  | 0
DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices |  | 0
Extending DeepSDF for automatic 3D shape retrieval and similarity transform estimation |  | 0
Structured Bayesian Compression for Deep Neural Networks Based on The Turbo-VBI Approach |  | 0
DeepTwist: Learning Model Compression via Occasional Weight Distortion |  | 0
DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier |  | 0
Delving Deep into Semantic Relation Distillation |  | 0
Densely Distilling Cumulative Knowledge for Continual Learning |  | 0
Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN |  | 0
Dense Vision Transformer Compression with Few Samples |  | 0
Dependency-Aware Semi-Structured Sparsity of GLU Variants in Large Language Models |  | 0
Deploying Foundation Model Powered Agent Services: A Survey |  | 0
Page 51 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified