Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
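
As a rough illustration of two of the techniques named above, here is a minimal NumPy sketch of unstructured magnitude pruning and post-training uniform quantization. The function names, defaults, and the choice of NumPy are illustrative assumptions for this page, not an API from any of the listed papers.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Unstructured pruning: zero out the smallest-magnitude weights.

    `sparsity` is the fraction of weights to remove (illustrative default).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def uniform_quantize(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Post-training uniform quantization: round weights onto 2**num_bits
    evenly spaced levels, then map back to floats ("fake quantization")
    so the accuracy impact can be measured directly."""
    levels = 2 ** num_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels or 1.0  # guard against constant weights
    q = np.round((weights - w_min) / scale)
    return q * scale + w_min

# Example: prune 50% of a random weight matrix, then quantize to 8 bits.
w = np.random.randn(256, 256).astype(np.float32)
w_small = uniform_quantize(magnitude_prune(w, sparsity=0.5), num_bits=8)
```

In practice the pruned or quantized model is usually fine-tuned afterwards to recover the accuracy lost by the compression step.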

Papers

Showing 601–610 of 1356 papers

Title | Status | Hype
An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation | | 0
Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression | | 0
BinaryBERT: Pushing the Limit of BERT Quantization | | 0
An Effective Information Theoretic Framework for Channel Pruning | | 0
AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting | | 0
Accelerating Deep Learning with Dynamic Data Pruning | | 0
DopQ-ViT: Towards Distribution-Friendly and Outlier-Aware Post-Training Quantization for Vision Transformers | | 0
2-bit Model Compression of Deep Convolutional Neural Network on ASIC Engine for Image Retrieval | | 0
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications | | 0
Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
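
The DKM rows above appear to refer to differentiable k-means weight clustering, where "2bit-1dim" means each scalar weight is replaced by one of 2^2 = 4 learned centroids. Below is a minimal sketch of the simpler hard k-means variant of that idea; DKM itself makes the cluster assignments soft and differentiable so they can be trained end to end, and everything here (names, initialization, iteration count) is an illustrative assumption.

```python
import numpy as np

def cluster_weights(weights: np.ndarray, num_bits: int = 2, iters: int = 20):
    """Hard k-means weight clustering: each scalar weight is stored as a
    num_bits index into a codebook of 2**num_bits float centroids."""
    k = 2 ** num_bits
    w = weights.ravel()
    # Spread the initial centroids evenly over the weight range.
    centroids = np.linspace(w.min(), w.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid (hard assignment;
        # DKM replaces this step with a differentiable soft assignment).
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of the weights assigned to it.
        for j in range(k):
            members = w[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    # Reconstructed weights: a codebook lookup per num_bits index.
    return centroids[assign].reshape(weights.shape), assign, centroids

# Example: compress a random weight matrix to 2 bits per weight.
w = np.random.randn(128, 128).astype(np.float32)
w_hat, codes, codebook = cluster_weights(w, num_bits=2)
```

The storage saving comes from keeping only the 2-bit indices plus the tiny float codebook instead of one 32-bit float per weight.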