
Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
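As a rough illustration of the three technique families named in the description, the NumPy sketch below applies magnitude-based pruning, SVD-based low-rank factorization, and uniform integer quantization to a single weight matrix. The function names and hyperparameters (sparsity, rank, bits) are illustrative assumptions for this sketch, not the method of any particular paper listed here.

```python
import numpy as np

# Minimal sketches of three compression techniques applied to one dense
# weight matrix. Illustrative only; hyperparameters are assumptions.

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128)).astype(np.float32)

# 1) Parameter pruning: zero out the smallest-magnitude weights,
#    keeping only the top (1 - sparsity) fraction.
def magnitude_prune(W, sparsity=0.9):
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Low-rank factorization: approximate W with a rank-r product U @ V,
#    storing r*(m+n) values instead of m*n.
def low_rank_factorize(W, rank=16):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

# 3) Weight quantization: map float weights to symmetric k-bit integers
#    plus one float scale (assumes bits <= 8 so int8 storage suffices).
def uniform_quantize(W, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / qmax
    q = np.clip(np.round(W / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale  # dequantize with q * scale

W_pruned = magnitude_prune(W)
U, V = low_rank_factorize(W)
q, scale = uniform_quantize(W)
print(f"pruned nonzeros: {np.count_nonzero(W_pruned)}/{W.size}")
print(f"low-rank error:  {np.linalg.norm(W - U @ V) / np.linalg.norm(W):.3f}")
print(f"quant error:     {np.linalg.norm(W - q * scale) / np.linalg.norm(W):.3f}")
```

In practice these techniques are often combined (e.g. pruning followed by quantization), and a fine-tuning pass is typically used to recover the accuracy lost at each step.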

Papers

Showing 461-470 of 1356 papers

Title | Status | Hype
Slicing Mutual Information Generalization Bounds for Neural Networks | Code | 0
Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective | Code | 0
Reweighted Solutions for Weighted Low Rank Approximation | | 0
Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning | | 0
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model | Code | 0
Effective Interplay between Sparsity and Quantization: From Theory to Practice | | 0
LCQ: Low-Rank Codebook based Quantization for Large Language Models | | 0
Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization | | 0
Occam Gradient Descent | Code | 0
subMFL: Compatible subModel Generation for Federated Learning in Device Heterogenous Environment | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified