
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the proposed methods for compressing deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
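
As a rough illustration (not code from any paper listed on this page), the three techniques named in the description can each be sketched in a few lines of NumPy: magnitude pruning zeroes the smallest weights, low-rank factorization stores two thin factors in place of a full matrix, and uniform quantization rounds weights to a small grid of shared levels. The function names and parameters below are illustrative choices, not a reference implementation.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Parameter pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def low_rank_factorize(w, rank=8):
    """Low-rank factorization: approximate w with two thin matrices."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (m, rank), singular values folded in
    b = vt[:rank, :]             # (rank, n)
    return a, b                  # store a @ b in place of w

def uniform_quantize(w, bits=8):
    """Weight quantization: round weights to 2**bits uniform levels."""
    scale = (w.max() - w.min()) / (2 ** bits - 1)
    q = np.round((w - w.min()) / scale)   # integer codes
    return q * scale + w.min()            # dequantized approximation

w = np.random.randn(256, 256).astype(np.float32)
print(np.mean(magnitude_prune(w) == 0))       # ~0.5 of weights zeroed
a, b = low_rank_factorize(w, rank=8)
print(a.shape, b.shape)                       # (256, 8) (8, 256)
print(np.abs(uniform_quantize(w) - w).max())  # small quantization error
```

In practice these operations are applied per layer and are usually followed by fine-tuning to recover any lost accuracy.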

Papers

Showing 291–300 of 1356 papers

Title | Status | Hype
----- | ------ | ----
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model | Code | 0
LCQ: Low-Rank Codebook based Quantization for Large Language Models | | 0
Effective Interplay between Sparsity and Quantization: From Theory to Practice | | 0
Occam Gradient Descent | Code | 0
Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization | | 0
subMFL: Compatiple subModel Generation for Federated Learning in Device Heterogenous Environment | Code | 0
ExtremeMETA: High-speed Lightweight Image Segmentation Model by Remodeling Multi-channel Metamaterial Imagers | | 0
Efficient Model Compression for Hierarchical Federated Learning | | 0
NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models | | 0
TinyM^2Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks for Sustainable Edge Deployment | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
- | ----- | ------ | ------- | -------- | ------
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
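
The DKM rows above refer to differentiable k-means weight clustering: each weight is replaced by one of 2^b shared centroids (4 distinct values at 2-bit, 2 at 1-bit), clustered per scalar weight ("1dim"). The NumPy sketch below shows only a hard-assignment approximation of the inference-time clustering step; DKM itself uses a soft, differentiable assignment during training. All names here are illustrative, not taken from the DKM paper's code.

```python
import numpy as np

def cluster_weights(w, bits=2, iters=20):
    """Hard k-means weight clustering: compress w to 2**bits shared values.

    DKM makes the assignment soft (a softmax over distances) so clustering
    stays differentiable during training; this sketch shows only the hard
    assignment used once training is done.
    """
    flat = w.reshape(-1)
    # Initialize 2**bits centroids evenly over the weight range.
    centroids = np.linspace(flat.min(), flat.max(), 2 ** bits)
    for _ in range(iters):
        # Assign each scalar weight to its nearest centroid ("1dim").
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for k in range(len(centroids)):
            if np.any(assign == k):
                centroids[k] = flat[assign == k].mean()
    return centroids[assign].reshape(w.shape)

w = np.random.randn(128, 128).astype(np.float32)
w_clustered = cluster_weights(w, bits=2)
print(np.unique(w_clustered).size)  # at most 4 distinct values at 2-bit
```

Only the centroid table and the per-weight integer codes need to be stored, which is where the compression comes from; the accuracy gap between the 2-bit and 1-bit rows reflects how aggressively the weight space is collapsed.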