
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
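The three techniques named above can be illustrated in a few lines of NumPy. The sketch below is purely illustrative, not taken from any paper listed on this page; the function names, the 90% sparsity level, the 2-bit width, and the rank are assumptions of this example. Magnitude pruning zeroes the smallest weights, uniform quantization snaps weights onto a small grid of values, and low-rank factorization replaces a weight matrix with two thinner factors obtained from a truncated SVD.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until `sparsity` of them are zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def uniform_quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Snap weights onto 2**bits evenly spaced levels, then de-quantize."""
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels
    codes = np.round((weights - w_min) / scale)  # integer codes in [0, levels]
    return codes * scale + w_min

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Approximate W with a rank-`rank` product U @ V via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.9)  # keep roughly 10% of the weights
w_quant = uniform_quantize(w, bits=2)        # at most 4 distinct values
u_f, v_f = low_rank_factorize(w, rank=8)     # 64*64 -> 2 * (64*8) parameters

print(f"zeros after pruning: {np.mean(w_pruned == 0):.1%}")
print(f"unique values after 2-bit quantization: {np.unique(w_quant).size}")
print(f"factorized parameter count: {u_f.size + v_f.size} vs {w.size}")
```

In a real network these operations would be applied per layer and are typically followed by fine-tuning to recover the lost accuracy.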

Papers

Showing 1141–1150 of 1356 papers

Title | Status | Hype
Tensorization of neural networks for improved privacy and interpretability | Code | 0
Network Pruning via Performance Maximization | Code | 0
Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation | Code | 0
Tensorized Embedding Layers for Efficient Model Compression | Code | 0
APSQ: Additive Partial Sum Quantization with Algorithm-Hardware Co-Design | Code | 0
Neural Architecture Codesign for Fast Physics Applications | Code | 0
Iterative Filter Pruning for Concatenation-based CNN Architectures | Code | 0
TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP | Code | 0
JavaScript Convolutional Neural Networks for Keyword Spotting in the Browser: An Experimental Analysis | Code | 0
Image Classification with CondenseNeXt for ARM-Based Computing Platforms | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified