
Model Compression

Model Compression has been an actively pursued research area over the past few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
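As a quick illustration of the three techniques named above, here is a minimal NumPy sketch on a toy weight matrix. The function names and the 50% sparsity, rank-8, and 8-bit settings are illustrative assumptions, not taken from the cited paper.

```python
# Toy sketch of pruning, low-rank factorization, and quantization.
# All settings (50% sparsity, rank 8, 8 bits) are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)  # toy layer weights

# 1. Parameter pruning: zero out the smallest-magnitude weights.
def magnitude_prune(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

# 2. Low-rank factorization: replace W with a rank-k product U @ V.
def low_rank_factorize(w: np.ndarray, rank: int = 8):
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]  # two thin factors to store

# 3. Weight quantization: uniform symmetric quantization to n bits.
def quantize(w: np.ndarray, bits: int = 8):
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)
    return q, scale  # dequantize with q * scale

pruned = magnitude_prune(W)
U, V = low_rank_factorize(W)
q, scale = quantize(W)
print("pruned nonzeros:", np.count_nonzero(pruned), "/", W.size)
print("rank-8 reconstruction error:", np.linalg.norm(W - U @ V))
print("8-bit quantization error:", np.linalg.norm(W - q * scale))
```

In practice the three approaches trade off differently: pruning yields sparse matrices that need sparse kernels to pay off, factorization replaces one matmul with two smaller ones, and quantization shrinks storage and can use integer arithmetic.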

Papers

Showing 991–1000 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains | | 0 |
| MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression | | 0 |
| MIMONet: Multi-Input Multi-Output On-Device Deep Learning | | 0 |
| MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks | | 0 |
| Minimally Invasive Surgery for Sparse Neural Networks in Contrastive Manner | | 0 |
| Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal | | 0 |
| Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework | | 0 |
| MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models | | 0 |
| MLPrune: Multi-Layer Pruning for Automated Neural Network Compression | | 0 |
| MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |
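The "2bit-1dim" and "1bit-1dim" entries above refer to codebook-style weight compression: each scalar weight is replaced by one of 2^b shared centroids (4 centroids for 2 bits, 2 for 1 bit). A minimal sketch of that idea using plain k-means follows; note this is an assumption-laden illustration, since DKM itself makes the cluster assignment differentiable so the codebook can be learned end-to-end during training.

```python
# Sketch of "b-bit, 1-dim" codebook compression: cluster scalar weights
# into 2**b shared centroids with plain k-means. Illustrative only; the
# actual DKM method uses a differentiable (soft) assignment step.
import numpy as np

def kmeans_quantize(w: np.ndarray, bits: int = 2, iters: int = 20):
    flat = w.ravel()
    k = 2 ** bits
    # Initialize centroids spread across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid (scalar distance).
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Recompute each centroid as the mean of its assigned weights.
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = flat[assign == j].mean()
    return centroids[assign].reshape(w.shape), assign, centroids

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)
W_q, codes, codebook = kmeans_quantize(W, bits=2)
print("codebook:", codebook)  # 4 shared centroids for the 2-bit case
print("relative error:", np.linalg.norm(W - W_q) / np.linalg.norm(W))
```

Only the 2-bit codes plus the tiny codebook need to be stored, which is where the compression comes from; the accuracy gap between the 2-bit and 1-bit rows above reflects how aggressively the codebook shrinks.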