
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
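The three technique families named above can be illustrated on a single weight matrix. Below is a minimal NumPy sketch, not taken from any of the listed papers; the function names and the 90% sparsity, rank-32, and 8-bit settings are illustrative assumptions. It shows magnitude pruning, truncated-SVD low-rank factorization, and uniform affine 8-bit weight quantization, comparing each approximation against the original weights by reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in for one layer's weights

# Parameter pruning: zero out the smallest-magnitude weights until the
# target sparsity is reached (assumed: unstructured, 90% sparsity).
def prune_by_magnitude(w, sparsity=0.9):
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

# Low-rank factorization: keep only the top-k singular directions. In a real
# deployment the two thin factors would be stored instead of the product.
def low_rank(w, k=32):
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]

# Weight quantization: uniform affine mapping of floats to 8-bit integers,
# returned here as the dequantized approximation for error measurement.
def quantize_uniform(w, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = -w.min() / scale
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax).astype(np.uint8)
    return scale * (q.astype(np.float32) - zero_point)

for name, approx in [("pruning", prune_by_magnitude(w)),
                     ("low-rank", low_rank(w)),
                     ("8-bit quant", quantize_uniform(w))]:
    print(f"{name:12s} reconstruction MSE: {np.mean((w - approx) ** 2):.5f}")
```

In practice these compression steps are usually followed by fine-tuning (or, as in the source paper above, knowledge distillation) to recover the accuracy lost to the approximation.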

Papers

Showing 1221–1230 of 1356 papers

Title | Status | Hype
Creating Lightweight Object Detectors with Model Compression for Deployment on Edge Devices |  | 0
Croesus: Multi-Stage Processing and Transactions for Video-Analytics in Edge-Cloud Systems |  | 0
Cross-Channel Intragroup Sparsity Neural Network |  | 0
Cross Domain Model Compression by Structurally Weight Sharing |  | 0
ClusComp: A Simple Paradigm for Model Compression and Efficient Finetuning |  | 0
Effective Model Compression via Stage-wise Pruning |  | 0
CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression |  | 0
CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness |  | 0
CURing Large Models: Compression via CUR Decomposition |  | 0
D^2MoE: Dual Routing and Dynamic Scheduling for Efficient On-Device MoE-based LLM Serving |  | 0
Page 123 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified