
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
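To make the three techniques named above concrete, here is a minimal NumPy sketch of each applied to a single weight matrix. The function names (`prune_by_magnitude`, `low_rank_factorize`, `quantize_uniform`) are illustrative assumptions for this sketch, not the API of any particular library.

```python
import numpy as np

def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Unstructured pruning: zero out the smallest-magnitude fraction of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Low-rank factorization: approximate W with two thin factors, W ~= A @ B."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # shape (m, rank)
    b = vt[:rank, :]             # shape (rank, n)
    return a, b

def quantize_uniform(w: np.ndarray, num_bits: int = 8):
    """Uniform symmetric quantization to signed integers, plus the dequantization scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 128)).astype(np.float32)

    w_pruned = prune_by_magnitude(w, sparsity=0.9)  # 90% of weights set to zero
    a, b = low_rank_factorize(w, rank=16)           # 256*16 + 16*128 params vs 256*128
    q, scale = quantize_uniform(w, num_bits=8)      # 1 byte per weight instead of 4

    print("pruned sparsity:", np.mean(w_pruned == 0))
    print("low-rank rel. error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
    print("quantization rel. error:", np.linalg.norm(w - q * scale) / np.linalg.norm(w))
```

Each primitive trades a different resource for accuracy: pruning reduces the number of nonzero parameters, factorization reduces the parameter count when the rank is small relative to the matrix dimensions, and quantization reduces the bits stored per weight.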

Papers

Showing 1281–1290 of 1356 papers

| Title | Status | Hype |
|---|---|---|
| Differentiable Feature Aggregation Search for Knowledge Distillation | | 0 |
| Differentiable Mask for Pruning Convolutional and Recurrent Networks | | 0 |
| Can We Find Strong Lottery Tickets in Generative Models? | | 0 |
| Differentiable Network Pruning for Microcontrollers | | 0 |
| Differentiable Sparsification for Deep Neural Networks | | 0 |
| Differentiable Sparsification for Deep Neural Networks | | 0 |
| Structured Compression by Weight Encryption for Unstructured Pruning and Quantization | | 0 |
| Differentially Private Model Compression | | 0 |
| Differential Privacy Meets Federated Learning under Communication Constraints | | 0 |
| Can Students Outperform Teachers in Knowledge Distillation based Model Compression? | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |