SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
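As a concrete illustration of the three techniques named above, here is a minimal NumPy sketch. The function names, the magnitude-threshold pruning rule, the truncated-SVD factorization, and the uniform quantization scheme are illustrative assumptions, not methods taken from the cited paper.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Parameter pruning: zero the smallest-magnitude `sparsity` fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Low-rank factorization: approximate W (m x n) as A (m x r) @ B (r x n) via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

def uniform_quantize(w: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Weight quantization: round weights to 2**num_bits uniform levels, then dequantize."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0  # guard against a constant tensor
    return np.round((w - lo) / scale) * scale + lo

# Toy usage: compress one random 256x128 "layer" with each technique.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128)).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.9)   # 90% of entries set to zero
a, b = low_rank_factorize(w, rank=16)       # 256*16 + 16*128 params vs. 256*128
w4bit = uniform_quantize(w, num_bits=4)     # 16 levels instead of 32-bit floats
```

Pruning keeps the tensor shape but makes it sparse, factorization trades one matrix for two smaller ones, and quantization shrinks per-weight precision; real pipelines typically fine-tune after each step to recover accuracy.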

Papers

Showing 681–690 of 1356 papers

Title | Status | Hype
Rotation Invariant Quantization for Model Compression | Code | 0
Adversarial Attacks on Machine Learning in Embedded and IoT Platforms | | 0
Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation | | 0
Debiased Distillation by Transplanting the Last Layer | | 0
Structured Bayesian Compression for Deep Neural Networks Based on The Turbo-VBI Approach | | 0
HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers | | 0
A Comprehensive Review and a Taxonomy of Edge Machine Learning: Requirements, Paradigms, and Techniques | | 0
Towards Optimal Compression: Joint Pruning and Quantization | | 0
On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence | | 0
Knowledge Distillation in Vision Transformers: A Critical Review | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified