
Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
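To make the three techniques named above concrete, below is a minimal NumPy sketch of unstructured magnitude pruning, truncated-SVD low-rank factorization, and uniform weight quantization applied to a single weight matrix. The function names and the 90%-sparsity / rank-8 / 4-bit settings are illustrative choices, not taken from any paper listed on this page.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Approximate W with a rank-r product A @ B via truncated SVD."""
    U, S, Vt = np.linalg.svd(weights, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # shape (m, r)
    B = Vt[:rank]                # shape (r, n)
    return A, B

def uniform_quantize(weights: np.ndarray, bits: int = 8) -> np.ndarray:
    """Quantize weights to 2^bits uniform levels, then dequantize back to float."""
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels or 1.0  # guard against a constant tensor
    q = np.round((weights - w_min) / scale)
    return (q * scale + w_min).astype(weights.dtype)

# Toy usage on a random "layer".
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.9)   # 90% of weights zeroed
A, B = low_rank_factorize(w, rank=8)          # W ~= A @ B, 4x fewer parameters
w_4bit = uniform_quantize(w, bits=4)          # at most 16 distinct weight values

print(f"sparsity: {(w_pruned == 0).mean():.2f}, "
      f"rank-8 relative error: {np.linalg.norm(w - A @ B) / np.linalg.norm(w):.2f}")
```

In practice these operations are applied per layer and usually followed by fine-tuning to recover accuracy; the sketch only shows the compression step itself.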

Papers

Showing 951–960 of 1356 papers

Title | Status | Hype
OPTISHEAR: Towards Efficient and Adaptive Pruning of Large Language Models via Evolutionary Optimization |  | 0
Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models |  | 0
A Memory-Efficient Learning Framework for Symbol-Level Precoding with Quantized NN Weights |  | 0
OTOV2: Automatic, Generic, User-Friendly |  | 0
Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling |  | 0
Towards Higher Ranks via Adversarial Weight Pruning |  | 0
Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network |  | 0
Pangu Light: Weight Re-Initialization for Pruning and Accelerating LLMs |  | 0
Parameter Compression of Recurrent Neural Networks and Degradation of Short-term Memory |  | 0
AMD: Automatic Multi-step Distillation of Large-scale Vision Models |  | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified
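The DKM entries above refer to clustering-based weight sharing, where an N-bit, 1-dimensional scheme replaces every scalar weight with one of 2^N shared centroids. As a rough illustration of that idea, here is a hard k-means version in NumPy; DKM itself learns the clustering differentiably during training, which this sketch does not attempt, and the function name and settings are illustrative.

```python
import numpy as np

def kmeans_cluster_weights(weights: np.ndarray, bits: int = 2, iters: int = 20) -> np.ndarray:
    """Cluster scalar weights into 2^bits centroids (hard k-means) and replace
    each weight with its nearest centroid, as in N-bit, 1-dim weight sharing.
    Note: this is plain hard-assignment k-means, not DKM's differentiable variant."""
    flat = weights.ravel()
    k = 2 ** bits
    # Initialize centroids evenly across the observed weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids[assign].reshape(weights.shape).astype(weights.dtype)

# Toy usage: 2-bit (4-centroid) clustering of a random weight matrix.
w = np.random.default_rng(0).standard_normal((128, 128)).astype(np.float32)
w_2bit = kmeans_cluster_weights(w, bits=2)
print("distinct values after clustering:", np.unique(w_2bit).size)  # at most 4
```

After clustering, each weight can be stored as a 2-bit centroid index plus a small shared codebook, which is where the memory saving comes from.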