SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
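As a quick illustration of two of the techniques named above, the sketch below applies magnitude-based weight pruning and dynamic 8-bit weight quantization to a toy PyTorch model. The network architecture, layer sizes, and 50% sparsity level are assumptions chosen only for illustration; they are not drawn from any paper listed on this page.

```python
# Minimal sketch: magnitude pruning + post-training dynamic 8-bit quantization.
# The model and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example network (hypothetical).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# 1) Parameter pruning: zero out the 50% smallest-magnitude weights per Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# 2) Weight quantization: dynamic int8 quantization of the Linear layers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Rough footprint of the original dense parameters, for comparison.
dense_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"dense parameter bytes: {dense_bytes}")
print(quantized)
```

In practice these steps are usually combined with fine-tuning to recover accuracy lost to pruning and quantization.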

Papers

Showing 651–660 of 1356 papers

| Title | Status | Hype |
|---|---|---|
| QUIDAM: A Framework for Quantization-Aware DNN Accelerator and Model Co-Exploration | – | 0 |
| Language model compression with weighted low-rank factorization | – | 0 |
| QTI Submission to DCASE 2021: residual normalization for device-imbalanced acoustic scene classification with efficient design | – | 0 |
| Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach | – | 0 |
| Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification | – | 0 |
| An Automatic and Efficient BERT Pruning for Edge AI Systems | – | 0 |
| Knowledge Distillation for Oriented Object Detection on Aerial Images | – | 0 |
| Revisiting Self-Distillation | – | 0 |
| Accelerating Inference and Language Model Fusion of Recurrent Neural Network Transducers via End-to-End 4-bit Quantization | – | 0 |
| Atrial Fibrillation Detection Using Weight-Pruned, Log-Quantised Convolutional Neural Networks | – | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified |