
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks; a sketch of each follows below.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
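For concreteness, here is a minimal, self-contained NumPy sketch of the three methods named above. The function names, the 90% sparsity target, the rank-32 factorization, and the 2-bit width are illustrative assumptions for demonstration, not settings taken from any of the papers listed below.

```python
# Illustrative sketches of three compression methods: magnitude-based
# parameter pruning, truncated-SVD low-rank factorization, and uniform
# weight quantization. All hyperparameters here are arbitrary examples.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Approximate W with two thin factors A @ B via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def uniform_quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Round weights to 2**bits evenly spaced levels, then dequantize."""
    levels = 2**bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels
    q = np.round((weights - w_min) / scale)
    return (q * scale + w_min).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.9)   # keep only the largest 10%
a, b = low_rank_factorize(w, rank=32)         # 256*256 -> 2 * 256*32 parameters
w_quant = uniform_quantize(w, bits=2)         # 2-bit, cf. the DKM rows below

print(f"nonzero after pruning:     {np.count_nonzero(w_pruned) / w.size:.1%}")
print(f"low-rank relative error:   {np.linalg.norm(w - a @ b) / np.linalg.norm(w):.3f}")
print(f"levels after quantization: {len(np.unique(w_quant))}")
```

In practice these transforms are applied to the layers of a trained network and are usually followed by fine-tuning to recover accuracy; frameworks such as PyTorch and TensorFlow ship built-in utilities for pruning and quantization.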

Papers

Showing 311–320 of 1356 papers

Title | Status | Hype
Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks | Code | 0
From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression | Code | 0
GASL: Guided Attention for Sparsity Learning in Deep Neural Networks | Code | 0
Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures | Code | 0
GSB: Group Superposition Binarization for Vision Transformer with Limited Training Samples | Code | 0
A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs | Code | 0
Cross-lingual Distillation for Text Classification | Code | 0
FLoCoRA: Federated learning compression with low-rank adaptation | Code | 0
Attribution-guided Pruning for Compression, Circuit Discovery, and Targeted Correction in LLMs | Code | 0
FLAT-LLM: Fine-grained Low-rank Activation Space Transformation for Large Language Model Compression | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified