
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
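The three techniques named above can each be illustrated in a few lines. The following is a minimal NumPy sketch, not any particular paper's implementation: the function names, the toy weight matrix, and the hyperparameters (sparsity, rank, bit width) are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # a dense weight matrix

# --- Parameter pruning: zero out the smallest-magnitude weights ---
def magnitude_prune(weights, sparsity=0.9):
    """Keep only the largest-magnitude (1 - sparsity) fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

# --- Low-rank factorization: approximate W with two thin matrices ---
def low_rank_factorize(weights, rank=32):
    """Truncated SVD: W (m x n) ~= A (m x r) @ B (r x n)."""
    U, S, Vt = np.linalg.svd(weights, full_matrices=False)
    A = U[:, :rank] * S[:rank]          # fold singular values into A
    B = Vt[:rank, :]
    return A, B

# --- Weight quantization: uniform k-bit quantization ---
def quantize_uniform(weights, bits=8):
    """Map weights onto 2**bits evenly spaced levels, then dequantize."""
    lo, hi = weights.min(), weights.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    q = np.round((weights - lo) / scale)
    return q * scale + lo               # dequantized approximation

W_pruned = magnitude_prune(W, sparsity=0.9)
A, B = low_rank_factorize(W, rank=32)
W_quant = quantize_uniform(W, bits=8)

print("pruned nonzero fraction:", np.count_nonzero(W_pruned) / W.size)
print("low-rank rel. error    :", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
print("quantized rel. error   :", np.linalg.norm(W - W_quant) / np.linalg.norm(W))
```

In practice these steps are applied per layer and usually followed by fine-tuning to recover accuracy; the sketch only shows the compression transforms themselves.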

Papers

Showing 281-290 of 1356 papers

| Title | Status | Hype |
|---|---|---|
| How does topology of neural architectures impact gradient propagation and model performance? | Code | 0 |
| Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Code | 0 |
| HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression | Code | 0 |
| ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs | Code | 0 |
| Bayesian Tensorized Neural Networks with Automatic Rank Selection | Code | 0 |
| Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks | Code | 0 |
| GSB: Group Superposition Binarization for Vision Transformer with Limited Training Samples | Code | 0 |
| Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning | Code | 0 |
| Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures | Code | 0 |
| A Miniaturized Semantic Segmentation Method for Remote Sensing Image | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |
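The DKM entries above compress weights by clustering them into a small shared codebook ("2bit-1dim" means each scalar weight is replaced by one of 2^2 = 4 shared values). As a rough illustration of the underlying idea only, here is a minimal sketch of plain (non-differentiable) k-means weight clustering; the function name, toy data, and iteration count are assumptions, and the actual DKM method makes the clustering step differentiable so it can be trained end to end.

```python
import numpy as np

def kmeans_cluster_weights(weights, bits=2, iters=25):
    """Cluster scalar weights into 2**bits shared values (a codebook),
    so each weight is stored as a small integer index ("Xbit-1dim")."""
    flat = weights.reshape(-1)
    k = 2 ** bits
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = flat[assign == j].mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return assign.reshape(weights.shape), centroids

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128)).astype(np.float32)
idx, codebook = kmeans_cluster_weights(W, bits=2)   # 4 shared values
W_compressed = codebook[idx]                        # dequantized weights
print("codebook:", codebook)
print("relative error:", np.linalg.norm(W - W_compressed) / np.linalg.norm(W))
```

Storing 2-bit indices plus a 4-entry codebook in place of 32-bit floats gives roughly a 16x reduction in weight storage, which is the trade-off the accuracy numbers in the table are measuring.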