
Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
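As a rough illustration of the three techniques named above, here is a minimal NumPy sketch: magnitude pruning zeroes the smallest weights, uniform quantization snaps weights to a small set of levels, and low-rank factorization replaces a weight matrix with two thin factors. The helper names (magnitude_prune, uniform_quantize, low_rank_factorize) are our own, not from any cited paper, and real compression pipelines typically fine-tune after each step.

import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that ~`sparsity` fraction are zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def uniform_quantize(weights: np.ndarray, num_bits: int) -> np.ndarray:
    """Quantize weights to 2**num_bits evenly spaced levels, then dequantize."""
    levels = 2 ** num_bits - 1
    w_min, w_max = weights.min(), weights.max()
    if w_max == w_min:                       # degenerate case: all weights equal
        return weights.copy()
    scale = (w_max - w_min) / levels
    codes = np.round((weights - w_min) / scale)   # integer codes in [0, levels]
    return codes * scale + w_min                  # dequantized values

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Approximate W ~= U @ V with rank-`rank` factors via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

# Toy example: prune a random "layer" to 50% sparsity, quantize to 4 bits,
# and separately build a rank-2 approximation of the original matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
w_compressed = uniform_quantize(magnitude_prune(w, sparsity=0.5), num_bits=4)
U, V = low_rank_factorize(w, rank=2)
print("nonzeros after pruning:", np.count_nonzero(w_compressed), "/", w.size)
print("rank-2 reconstruction error:", np.linalg.norm(w - U @ V))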

Papers

Showing 1201–1225 of 1356 papers

Title | Status | Hype
Weight Normalization based Quantization for Deep Neural Network Compression | — | 0
COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning | Code | 0
Joint Regularization on Activations and Weights for Efficient Neural Network Pruning | — | 0
Scalable Model Compression by Entropy Penalized Reparameterization | — | 0
Membership Privacy for Machine Learning Models Through Knowledge Transfer | — | 0
Does Learning Require Memorization? A Short Tale about a Long Tail | — | 0
Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining | — | 0
Deep Face Recognition Model Compression via Knowledge Transfer and Distillation | — | 0
Compressing Convolutional Neural Networks via Factorized Convolutional Filters | Code | 0
Cross Domain Model Compression by Structurally Weight Sharing | — | 0
Multi-Precision Quantized Neural Networks via Encoding Decomposition of -1 and +1 | — | 0
HadaNets: Flexible Quantization Strategies for Neural Networks | — | 0
Bayesian Tensorized Neural Networks with Automatic Rank Selection | Code | 0
Learning Low-Rank Approximation for CNNs | — | 0
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization | — | 0
DARC: Differentiable ARchitecture Compression | — | 0
Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems | Code | 0
Dream Distillation: A Data-Independent Model Compression Framework | — | 0
Network Pruning for Low-Rank Binary Indexing | — | 0
Play and Prune: Adaptive Filter Pruning for Deep Model Compression | Code | 0
2-bit Model Compression of Deep Convolutional Neural Network on ASIC Engine for Image Retrieval | — | 0
Creating Lightweight Object Detectors with Model Compression for Deployment on Edge Devices | — | 0
26ms Inference Time for ResNet-50: Towards Real-Time Execution of all DNNs on Smartphone | — | 0
Toward Extremely Low Bit and Lossless Accuracy in DNNs with Progressive ADMM | — | 0
Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified
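DKM in the table above refers to differentiable k-means weight clustering, and "2bit-1dim" suggests that each scalar weight is mapped to one of 2^2 = 4 shared centroid values. The sketch below shows only the underlying hard k-means palettization step; the actual DKM method uses a soft, differentiable assignment during training so that gradients reach the centroids. The name kmeans_palettize is our own, for illustration.

import numpy as np

def kmeans_palettize(weights: np.ndarray, num_bits: int, iters: int = 20):
    """Cluster flattened weights into 2**num_bits shared values (hard k-means)."""
    flat = weights.reshape(-1, 1)
    k = 2 ** num_bits
    # Initialize centroids spread evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k).reshape(-1, 1)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.argmin(np.abs(flat - centroids.T), axis=1)
        # Recompute each centroid as the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids[assign].reshape(weights.shape), assign

# A 2-bit, 1-dimensional palette: every weight becomes one of 4 shared values.
w = np.random.default_rng(1).normal(size=(8, 8))
w_q, codes = kmeans_palettize(w, num_bits=2)
print("unique values after palettization:", np.unique(w_q).size)  # at most 4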