
Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
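The three technique families named in the description can be illustrated on a single weight matrix. The sketch below is a minimal NumPy illustration, not the method of any listed paper; the matrix shape, the 90% pruning ratio, the rank `r = 32`, and the 8-bit symmetric quantization scheme are all arbitrary assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)  # a dense weight matrix

# 1. Parameter pruning: zero out the 90% of weights with smallest magnitude,
#    leaving a sparse matrix that can be stored in a compressed format.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2. Low-rank factorization: approximate W with rank-r SVD factors.
#    Storing A (256 x r) and B (r x 512) replaces the 256 x 512 matrix.
r = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :r] * S[:r], Vt[:r, :]
W_lowrank = A @ B

# 3. Weight quantization: symmetric linear quantization to int8,
#    a 4x size reduction versus float32 at the cost of rounding error.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_q.astype(np.float32) * scale

print("pruned nonzeros:", np.count_nonzero(W_pruned), "of", W.size)
print("low-rank params:", A.size + B.size, "vs", W.size)
print("quantization max error:", np.abs(W - W_dequant).max())
```

In practice these techniques are applied per layer across a trained network and are often combined (e.g. pruning followed by quantization), usually with a fine-tuning pass to recover accuracy.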

Papers

Showing 151–160 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Activation Sparsity Opportunities for Compressing General Large Language Models | | 0 |
| Can Students Beyond The Teacher? Distilling Knowledge from Teacher's Bias | | 0 |
| Optimising TinyML with Quantization and Distillation of Transformer and Mamba Models for Indoor Localisation on Edge Devices | | 0 |
| Low-Rank Correction for Quantized LLMs | | 0 |
| Lossless Model Compression via Joint Low-Rank Factorization Optimization | | 0 |
| Compression for Better: A General and Stable Lossless Compression Framework | | 0 |
| VQ4ALL: Efficient Neural Network Representation via a Universal Codebook | | 0 |
| Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search | | 0 |
| CPTQuant -- A Novel Mixed Precision Post-Training Quantization Techniques for Large Language Models | | 0 |
| Efficient Model Compression Techniques with FishLeg | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |