
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
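
To make these three techniques concrete, below is a minimal NumPy sketch of magnitude pruning, truncated-SVD low-rank factorization, and uniform weight quantization. It is an illustrative example only; the function names and the sparsity, rank, and bit-width settings are arbitrary choices, not taken from any of the papers listed below.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights), k - 1, axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def low_rank_factorize(weights, rank=32):
    """Approximate W with a rank-`rank` product A @ B via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def uniform_quantize(weights, bits=8):
    """Round weights onto 2**bits evenly spaced levels, then dequantize."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (2 ** bits - 1)
    return np.round((weights - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.9)
a, b = low_rank_factorize(w, rank=32)
w_quant = uniform_quantize(w, bits=4)

print("nonzero fraction after pruning:", np.count_nonzero(w_pruned) / w.size)
print("relative error of rank-32 factorization:",
      float(np.linalg.norm(w - a @ b) / np.linalg.norm(w)))
print("quantization MSE at 4 bits:", float(np.mean((w - w_quant) ** 2)))
```

Note that on a random Gaussian matrix the rank-32 approximation error is large; trained weight matrices typically have faster-decaying singular values, which is what makes low-rank factorization useful in practice.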

Papers

Showing 1091–1100 of 1356 papers

Title | Status | Hype
An Overview of Neural Network Compression | | 0
Discrete Model Compression With Resource Constraint for Deep Neural Networks | | 0
Multi-Dimensional Pruning: A Unified Framework for Model Compression | | 0
Weight Squeezing: Reparameterization for Compression and Fast Inference | | 0
CoDiNet: Path Distribution Modeling with Consistency and Diversity for Dynamic Routing | Code | 0
Exploiting Non-Linear Redundancy for Neural Model Compression | | 0
VecQ: Minimal Loss DNN Model Compression With Vectorized Weight Quantization | Code | 0
A flexible, extensible software framework for model compression based on the LC algorithm | | 0
PENNI: Pruned Kernel Sharing for Efficient CNN Inference | Code | 0
Compressing Recurrent Neural Networks Using Hierarchical Tucker Tensor Decomposition | | 0
Page 110 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
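
The "b-bit, d-dim" naming in the DKM rows refers to clustering weights into 2^b shared centroids over d-dimensional weight blocks, so "2bit-1dim" assigns every scalar weight to one of four learned values. The sketch below shows the hard k-means core of that idea in NumPy; it is an illustration under that reading, not the DKM implementation itself, which (per the DKM paper) relaxes the assignment into a differentiable, attention-style soft clustering so centroids can be trained with the task loss.

```python
import numpy as np

def kmeans_quantize_1d(weights, bits=2, iters=20):
    """Cluster scalar weights into 2**bits centroids with hard k-means
    and replace each weight by its nearest centroid (the step DKM relaxes
    into a differentiable soft assignment)."""
    w = weights.ravel()
    k = 2 ** bits
    # Initialize centroids at evenly spaced quantiles of the weights.
    centroids = np.quantile(w, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            members = w[assign == c]
            if members.size:
                centroids[c] = members.mean()
    assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape), centroids

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512)).astype(np.float32)
w_q, centroids = kmeans_quantize_1d(w, bits=2)
# Four shared values plus 2-bit indices now stand in for ~262k float32 weights.
print("centroids:", centroids)
print("clustering MSE:", float(np.mean((w - w_q) ** 2)))
```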