Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
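As a rough illustration of one of these techniques, the sketch below applies magnitude-based parameter pruning to a weight matrix in NumPy. The `prune_by_magnitude` helper and the 90% sparsity target are illustrative assumptions, not the method of any specific paper listed here; real pipelines typically prune iteratively and fine-tune the remaining weights.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that roughly `sparsity`
    fraction of the entries become zero (illustrative sketch only)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest absolute weight.
    threshold = np.partition(np.abs(weights), k - 1, axis=None)[k - 1]
    # Keep only weights whose magnitude exceeds the threshold.
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune ~90% of a random 256x256 weight matrix.
w = np.random.randn(256, 256).astype(np.float32)
w_pruned = prune_by_magnitude(w, sparsity=0.9)
print(f"Nonzero fraction after pruning: {np.count_nonzero(w_pruned) / w_pruned.size:.3f}")
```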

Papers

Showing 831–840 of 1356 papers

Title | Status | Hype
Towards Superior Quantization Accuracy: A Layer-sensitive Approach | | 0
Do we need Label Regularization to Fine-tune Pre-trained Language Models? | | 0
Towards Zero-Shot Knowledge Distillation for Natural Language Processing | | 0
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models | | 0
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization | | 0
T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit | | 0
TrimLLM: Progressive Layer Dropping for Domain-Specific LLMs | | 0
Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search | | 0
Triple Sparsification of Graph Convolutional Networks without Sacrificing the Accuracy | | 0
Tuning Algorithms and Generators for Efficient Edge Inference | | 0
Page 84 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified