SOTAVerified

Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power and resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
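The three techniques named above can each be sketched in a few lines of NumPy on a toy weight matrix. This is an illustrative sketch only: the function names, the 4x4 matrix, and the specific sparsity/bit-width/rank settings are assumptions for demonstration, not any particular library's API.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Unstructured pruning: zero out the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute value across the flattened matrix.
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def quantize_uniform(w, bits=8):
    """Uniform affine quantization to `bits`-bit levels, then dequantize back."""
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((w - lo) / scale)          # integer codes in [0, levels]
    return q * scale + lo                    # dequantized approximation of w

def low_rank(w, rank=1):
    """Low-rank factorization: keep only the top `rank` singular components."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))             # toy "layer" weights

pruned = magnitude_prune(w, sparsity=0.5)   # half the entries set to zero
deq = quantize_uniform(w, bits=4)           # coarse 4-bit approximation
w_lr = low_rank(w, rank=2)                  # rank-2 approximation
```

Each function trades accuracy for size in a different way: pruning yields sparse weights, quantization reduces bits per weight, and the SVD truncation replaces one dense matrix with two thin factors.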

Papers

Showing 1351-1356 of 1356 papers

Title | Status | Hype
To Know Where We Are: Vision-Based Positioning in Outdoor Environments | | 0
A Scale Mixture Perspective of Multiplicative Noise in Neural Networks | | 0
Accelerating Very Deep Convolutional Networks for Classification and Detection | | 0
Unsupervised model compression for multilayer bootstrap networks | | 0
Speeding up Convolutional Neural Networks with Low Rank Expansions | | 0
Efficient classification using parallel and scalable compressed model and Its application on intrusion detection | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified