Model Compression

Model compression has been an actively pursued research area in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
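To make two of these techniques concrete, here is a minimal NumPy sketch (not from the cited paper; layer shapes, sparsity level, and bit width are illustrative assumptions) of magnitude-based parameter pruning and uniform weight quantization applied to a random weight matrix:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the given fraction of smallest-magnitude weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def uniform_quantize(weights, num_bits=2):
    """Uniformly quantize weights to 2**num_bits levels, then dequantize."""
    levels = 2 ** num_bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels
    codes = np.round((weights - w_min) / scale)  # integer codebook indices
    return codes * scale + w_min                 # dequantized approximation

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # toy layer weights

pruned = magnitude_prune(w, sparsity=0.9)
quantized = uniform_quantize(w, num_bits=2)

print("sparsity after pruning:", np.mean(pruned == 0.0))
print("unique values after 2-bit quantization:", np.unique(quantized).size)
```

In practice the pruned mask would be stored in a sparse format and the quantized weights as integer codes plus a scale, which is where the storage savings come from; production methods (including clustering-based schemes such as DKM in the benchmark below) typically also fine-tune the network to recover accuracy.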

Papers

Showing 1191–1200 of 1356 papers

Title | Status | Hype
How does topology influence gradient propagation and model performance of deep networks with DenseNet-type skip connections? | Code | 0
Language Model Knowledge Distillation for Efficient Question Answering in Spanish | Code | 0
Universal approximation and model compression for radial neural networks | Code | 0
Large Multimodal Model Compression via Efficient Pruning and Distillation at AntGroup | Code | 0
Resource Constrained Model Compression via Minimax Optimization for Spiking Neural Networks | Code | 0
Data Efficient Stagewise Knowledge Distillation | Code | 0
Online Ensemble Model Compression using Knowledge Distillation | Code | 0
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data | Code | 0
Compressed Object Detection | Code | 0
The Shallow End: Empowering Shallower Deep-Convolutional Networks through Auxiliary Outputs | Code | 0
Page 120 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified