SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
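As a minimal illustration of one of the techniques mentioned above, unstructured magnitude-based parameter pruning can be sketched in a few lines. This is a generic, hypothetical example (NumPy, not tied to any of the listed papers): weights whose absolute value falls in the smallest fraction are zeroed out.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"sparsity: {np.mean(pruned == 0):.2f}")  # → sparsity: 0.90
```

In practice the surviving weights are usually fine-tuned afterwards, and structured variants prune whole neurons or channels so that the saving translates into actual speedups on hardware.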

Papers

Showing 1301–1310 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Multi-Task Zipping via Layer-wise Neuron Sharing | | 0 |
| DEEPEYE: A Compact and Accurate Video Comprehension at Terminal Devices Compressed with Quantization and Tensorization | | 0 |
| Precise Box Score: Extract More Information from Datasets to Improve the Performance of Face Detection | | 0 |
| Developing Far-Field Speaker System Via Teacher-Student Learning | | 0 |
| Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory | Code | 0 |
| Efficient Recurrent Neural Networks using Structured Matrices in FPGAs | | 0 |
| Interpreting Deep Classifier by Visual Distillation of Dark Knowledge | | 0 |
| Model compression via distillation and quantization | Code | 0 |
| Paraphrasing Complex Network: Network Compression via Factor Transfer | Code | 0 |
| Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography | | 0 |
Page 131 of 136

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |
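The benchmark entries above quantize MobileBERT weights to 2-bit and 1-bit codebooks via DKM (differentiable k-means). A hard k-means variant of that idea can be sketched as follows; this is a simplified, illustrative stand-in (plain NumPy, per-tensor, hard assignments), not the actual DKM implementation, which keeps the cluster assignments differentiable during training:

```python
import numpy as np

def kmeans_quantize(weights, bits=2, iters=20):
    """Cluster weights into 2**bits shared values (hard k-means,
    a simplified stand-in for DKM's differentiable assignment)."""
    flat = weights.ravel()
    k = 2 ** bits
    # initialize centroids evenly across the weight range
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # assign each weight to its nearest centroid, then update centroids
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape), centroids

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
wq, codebook = kmeans_quantize(w, bits=2)
print(len(np.unique(wq)))  # at most 2**bits = 4 distinct values
```

Storing only the 2-bit indices plus the 4-entry codebook is what yields the roughly 16x reduction over 32-bit floats that such schemes target.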