Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
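To make the three technique families named above concrete, here is a minimal sketch using PyTorch's built-in utilities on a toy model. The layer sizes, the 30% pruning ratio, and the rank of 32 are illustrative assumptions, not values taken from any paper listed below.

```python
# Minimal sketch of the three compression families named above,
# using standard PyTorch APIs; sizes and ratios are illustrative.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# 1) Parameter pruning: zero the 30% of weights with the smallest
#    L1 magnitude in each Linear layer, then bake the mask in.
for m in model.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.3)
        prune.remove(m, "weight")

# 2) Low-rank factorization: approximate the first layer's weight
#    matrix W (256x784) by a rank-32 truncated SVD, so it can be
#    stored as two thin factors instead of one full matrix.
W = model[0].weight.detach()
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
rank = 32
W_approx = (U[:, :rank] * S[:rank]) @ Vh[:rank, :]

# 3) Weight quantization: store Linear weights as int8 and
#    dequantize on the fly at inference (dynamic quantization).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(f"sparsity: {(W == 0).float().mean().item():.0%}, "
      f"rank-{rank} error: {((W - W_approx).norm() / W.norm()).item():.3f}")
```

In practice, pruning and factorization are typically followed by fine-tuning to recover the accuracy lost to compression.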

Papers

Showing 1101–1150 of 1356 papers

Title | Status | Hype
Performance Aware Convolutional Neural Network Channel Pruning for Embedded GPUs | – | 0
Variational Bayesian Quantization | Code | 1
PCNN: Pattern-based Fine-Grained Regular Pruning towards Optimizing CNN Accelerators | – | 0
Understanding and Improving Knowledge Distillation | – | 0
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing | Code | 1
Lightweight Convolutional Representations for On-Device Natural Language Processing | – | 0
Search for Better Students to Learn Distilled Knowledge | – | 0
MT-BioNER: Multi-task Learning for Biomedical Named Entity Recognition using Deep Bidirectional Transformers | – | 0
Small, Accurate, and Fast Vehicle Re-ID on the Edge: the SAFR Approach | – | 0
SS-Auto: A Single-Shot, Automatic Structured Weight Pruning Framework of DNNs with Ultra-High Efficiency | – | 0
A "Network Pruning Network" Approach to Deep Model Compression | – | 0
Discrimination-aware Network Pruning for Deep Model Compression | Code | 1
FedBoost: A Communication-Efficient Algorithm for Federated Learning | – | 0
ZeroQ: A Novel Zero Shot Quantization Framework | Code | 1
PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning | – | 0
Differentiable Architecture Compression | – | 0
DeGAN : Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier | – | 0
Domain Adaptation Regularization for Spectral Pruning | – | 0
Data-Free Adversarial Distillation | Code | 0
Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning | Code | 0
Towards Building a Real Time Mobile Device Bird Counting System Through Synthetic Data Training and Model Compression | – | 0
An Improving Framework of regularization for Network Compression | – | 0
Explaining Sequence-Level Knowledge Distillation as Data-Augmentation for Neural Machine Translation | – | 0
Deep Model Compression Via Two-Stage Deep Reinforcement Learning | – | 0
The Knowledge Within: Methods for Data-Free Model Compression | – | 0
TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP | Code | 0
Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks | Code | 0
Pruning at a Glance: Global Neural Pruning for Model Compression | – | 0
Data-Driven Compression of Convolutional Neural Networks | – | 0
Communication-Efficient Distributed Online Learning with Kernels | – | 0
Structured Multi-Hashing for Model Compression | – | 0
A SOT-MRAM-based Processing-In-Memory Engine for Highly Compressed DNN Implementation | – | 0
Graph Pruning for Model Compression | – | 0
Few Shot Network Compression via Cross Distillation | Code | 0
On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning | Code | 0
DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks | – | 0
Distributed Low Precision Training Without Mixed Precision | – | 0
ASCAI: Adaptive Sampling for acquiring Compact AI | – | 0
Data Efficient Stagewise Knowledge Distillation | Code | 0
Learning from a Teacher using Unlabeled Data | Code | 1
What Do Compressed Deep Neural Networks Forget? | Code | 0
A Computing Kernel for Network Binarization on PyTorch | Code | 0
SubCharacter Chinese-English Neural Machine Translation with Wubi encoding | – | 0
A Programmable Approach to Neural Network Compression | Code | 0
Localization-aware Channel Pruning for Object Detection | – | 0
Comprehensive SNN Compression Using ADMM Optimization and Activity Regularization | Code | 0
Locality-Sensitive Hashing for f-Divergences: Mutual Information Loss and Beyond | – | 0
Cross-Channel Intragroup Sparsity Neural Network | – | 0
LPRNet: Lightweight Deep Network by Low-rank Pointwise Residual Convolution | – | 0
Contrastive Representation Distillation | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified
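The DKM entries above refer to differentiable k-means weight clustering; "2bit-1dim" means each scalar weight is replaced by a 2-bit index into a palette of four shared values. The sketch below illustrates only the underlying palettization idea with plain hard-assignment k-means; the actual DKM method relaxes the assignment step to be differentiable so the palette is learned jointly with training, and nothing here reproduces the benchmarked setup. The function name, weight shape, and iteration count are illustrative assumptions.

```python
# Illustrative hard-assignment k-means palettization (the idea
# behind a "2bit-1dim" setting); NOT the DKM algorithm itself.
import torch

def palettize(w: torch.Tensor, bits: int = 2, iters: int = 20):
    flat = w.flatten()
    k = 2 ** bits  # 2-bit -> 4 shared weight values
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    assign = torch.zeros_like(flat, dtype=torch.long)
    for _ in range(iters):
        # Assign every weight to its nearest centroid ...
        assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        # ... then move each centroid to the mean of its members.
        for j in range(k):
            members = flat[assign == j]
            if members.numel():
                centroids[j] = members.mean()
    return centroids[assign].reshape(w.shape), centroids

w = torch.randn(128, 128)
w_q, palette = palettize(w)
print(palette)  # each weight now stores only a 2-bit index into 4 values
```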