SOTAVerified

Model Compression

Model compression has been an actively pursued research area in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks; a minimal sketch of all three appears below.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
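
To make the three technique families concrete, here is a minimal PyTorch sketch. It is not taken from the source paper; the layer sizes, the 50% pruning ratio, and the rank r = 32 are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model; sizes are illustrative, not from any paper above.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# 1. Parameter pruning: zero out the 50% of weights with the
#    smallest L1 magnitude, then bake the mask into the tensor.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
prune.remove(model[0], "weight")

# 2. Low-rank factorization: approximate the dense weight W (256x512)
#    by a truncated SVD, W ~ U_r @ diag(S_r) @ Vh_r, keeping rank r = 32.
#    Two thin layers then store ~24.8K parameters instead of ~131K.
W = model[0].weight.detach()
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
r = 32
low_rank = nn.Sequential(
    nn.Linear(512, r, bias=False),  # weight: diag(S_r) @ Vh_r, shape (r, 512)
    nn.Linear(r, 256, bias=True),   # weight: U_r, shape (256, r)
)
low_rank[0].weight.data = (torch.diag(S[:r]) @ Vh[:r]).contiguous()
low_rank[1].weight.data = U[:, :r].contiguous()
low_rank[1].bias.data = model[0].bias.detach()

# 3. Weight quantization: store Linear weights as 8-bit integers,
#    dequantizing on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

In practice these methods are usually combined with fine-tuning (or, as in the KD-MRI source above, knowledge distillation) to recover the accuracy lost during compression.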

Papers

Showing 1171–1180 of 1356 papers

Title | Status | Hype
Deep Neural Network Compression for Image Classification and Object Detection | Code | 0
How does topology influence gradient propagation and model performance of deep networks with DenseNet-type skip connections? | Code | 0
Adversarial Robustness vs. Model Compression, or Both? | Code | 0
REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs | - | 0
Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning | - | 0
Extremely Small BERT Models from Mixed-Vocabulary Training | - | 0
Atomic Compression Networks | - | 0
Network Pruning for Low-Rank Binary Index | - | 0
GQ-Net: Training Quantization-Friendly Deep Networks | - | 0
Decoupling Weight Regularization from Batch Size for Model Compression | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified