
Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
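To make the three techniques named in the description concrete, here is a minimal NumPy sketch of each one applied to a single dense weight matrix. All function names, shapes, and hyperparameters (sparsity level, rank, bit width) are illustrative choices, not taken from any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)  # a dense layer's weights

# 1. Parameter pruning: zero out the weights with the smallest magnitudes.
def magnitude_prune(weights, sparsity=0.9):
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

# 2. Low-rank factorization: approximate W with two thin factors via SVD,
#    so one 256x512 matmul becomes a 256xr matmul followed by an rx512 one.
def low_rank_factorize(weights, rank=32):
    U, S, Vt = np.linalg.svd(weights, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (256, rank)
    B = Vt[:rank, :]             # (rank, 512)
    return A, B

# 3. Weight quantization: map each weight to one of 2**bits uniform levels
#    (8 bits or fewer, so the codes fit in a uint8).
def uniform_quantize(weights, bits=8):
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / (2**bits - 1)
    codes = np.round((weights - lo) / scale).astype(np.uint8)  # stored compactly
    return codes * scale + lo                                  # dequantized view

W_pruned = magnitude_prune(W)
A, B = low_rank_factorize(W)
W_quant = uniform_quantize(W)
print("pruned sparsity:", np.mean(W_pruned == 0))
print("low-rank error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
print("quantization error:", np.linalg.norm(W - W_quant) / np.linalg.norm(W))
```

In practice these are applied per layer and followed by fine-tuning to recover accuracy; the papers below study exactly those trade-offs.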

Papers

Showing 1101–1110 of 1356 papers

Title | Status | Hype
Performance Aware Convolutional Neural Network Channel Pruning for Embedded GPUs | — | 0
Variational Bayesian Quantization | Code | 1
PCNN: Pattern-based Fine-Grained Regular Pruning towards Optimizing CNN Accelerators | — | 0
Understanding and Improving Knowledge Distillation | — | 0
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing | Code | 1
Lightweight Convolutional Representations for On-Device Natural Language Processing | — | 0
Search for Better Students to Learn Distilled Knowledge | — | 0
MT-BioNER: Multi-task Learning for Biomedical Named Entity Recognition using Deep Bidirectional Transformers | — | 0
Small, Accurate, and Fast Vehicle Re-ID on the Edge: the SAFR Approach | — | 0
SS-Auto: A Single-Shot, Automatic Structured Weight Pruning Framework of DNNs with Ultra-High Efficiency | — | 0
Page 111 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified
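The "b-bit, 1-dim" labels above refer to DKM-style weight clustering: every scalar weight is replaced by one of 2**b codebook centroids. DKM itself (Differentiable K-Means) makes the cluster assignment soft and trains it jointly with the task loss; the sketch below only illustrates the codebook/storage idea with plain hard k-means, and all names in it are illustrative.

```python
import numpy as np

def kmeans_1d(weights, bits=2, iters=20):
    """Hard 1-D k-means over scalar weights: 2**bits centroids."""
    flat = weights.ravel()
    # Initialize centroids from quantiles of the weight distribution.
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, 2**bits))
    for _ in range(iters):
        # Assign each weight to its nearest centroid (the "1-dim" part).
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned weights.
        for k in range(len(centroids)):
            if np.any(assign == k):
                centroids[k] = flat[assign == k].mean()
    return centroids, assign.reshape(weights.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128)).astype(np.float32)
centroids, codes = kmeans_1d(W, bits=2)  # "2bit-1dim": 4 scalar centroids
W_compressed = centroids[codes]          # each code needs only 2 bits to store
print("codebook:", centroids)
```

The steep accuracy gap between the 2-bit (82.13) and 1-bit (63.17) rows is consistent with how aggressively a 2-entry codebook constrains the weights.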