
Model Compression

Model compression has been an actively pursued research area over the past few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
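
As a concrete illustration of the three techniques named above, the sketch below applies magnitude pruning, truncated-SVD low-rank factorization, and 8-bit uniform quantization to a toy weight matrix. The layer shape, sparsity level, rank, and bit width are arbitrary illustrative choices, not taken from any paper listed on this page.

```python
# Minimal, self-contained sketch of the three compression techniques named
# above, applied to a single dense-layer weight matrix with NumPy.
# All hyperparameters (shape, sparsity, rank, bit width) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # toy layer weights

# 1) Parameter pruning: zero out the 90% of weights with smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Low-rank factorization: keep the top-k singular values, storing W as
#    U_k @ V_k (2 * 256 * k parameters instead of 256 * 256 when k << 256).
k = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
U_k = U[:, :k] * S[:k]   # absorb singular values into the left factor
V_k = Vt[:k, :]
W_lowrank = U_k @ V_k

# 3) Weight quantization: symmetric 8-bit uniform quantization.
scale = np.abs(W).max() / 127.0
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale

for name, approx in [("pruned", W_pruned), ("low-rank", W_lowrank),
                     ("int8", W_dequant)]:
    err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    print(f"{name:>8}: relative reconstruction error = {err:.3f}")
```

In practice these primitives are usually followed by fine-tuning (or combined with knowledge distillation, as in the source paper above) to recover the accuracy lost to compression.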

Papers

Showing 551–575 of 1356 papers

Title | Status | Hype
Explainability-Driven Leaf Disease Classification Using Adversarial Training and Knowledge Distillation | – | 0
DMT: Comprehensive Distillation with Multiple Self-supervised Teachers | – | 0
Integrating Fairness and Model Pruning Through Bi-level Optimization | – | 0
Unraveling Key Factors of Knowledge Distillation | – | 0
RankDVQA-mini: Knowledge Distillation-Driven Deep Video Quality Assessment | – | 0
USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models | – | 0
Large Multimodal Model Compression via Efficient Pruning and Distillation at AntGroup | Code | 0
Neural Architecture Codesign for Fast Bragg Peak Analysis | – | 0
Understanding the Effect of Model Compression on Social Bias in Large Language Models | Code | 0
Language Model Knowledge Distillation for Efficient Question Answering in Spanish | Code | 0
The Efficiency Spectrum of Large Language Models: An Algorithmic Survey | Code | 0
Physics Inspired Criterion for Pruning-Quantization Joint Learning | Code | 0
Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning | Code | 0
Towards Higher Ranks via Adversarial Weight Pruning | – | 0
LayerCollapse: Adaptive compression of neural networks | – | 0
Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence | – | 0
Cosine Similarity Knowledge Distillation for Individual Class Information Transfer | – | 0
Education distillation:getting student models to learn in shcools | – | 0
Knowledge Distillation Based Semantic Communications For Multiple Users | – | 0
Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper | – | 0
Efficient Transformer Knowledge Distillation: A Performance Review | – | 0
Shedding the Bits: Pushing the Boundaries of Quantization with Minifloats on FPGAs | – | 0
Efficient Neural Networks for Tiny Machine Learning: A Comprehensive Review | – | 0
On the Impact of Calibration Data in Post-training Quantization and Pruning | – | 0
A Speed Odyssey for Deployable Quantization of LLMs | – | 0
Page 23 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified
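
For context on these entries: DKM appears to refer to differentiable k-means weight clustering, and "2bit-1dim" suggests that each scalar weight is mapped to one of 2^2 = 4 shared centroids, so the model stores only 2-bit indices plus a small codebook. The sketch below uses plain (non-differentiable) k-means as a rough stand-in for that compression scheme; it is an assumption-laden illustration, not the DKM algorithm from the benchmarked papers.

```python
# Rough sketch of "2bit-1dim" weight clustering: every scalar weight is
# replaced by one of 2**2 = 4 shared centroids, so only 2-bit indices plus a
# tiny codebook need to be stored. Plain Lloyd's k-means is used here as a
# stand-in; this is NOT the differentiable DKM procedure itself.
import numpy as np

def kmeans_1d(weights, n_clusters=4, n_iters=20):
    """Cluster scalar weights into n_clusters centroids (Lloyd's algorithm)."""
    flat = weights.ravel()
    # Initialize centroids spread evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iters):
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(n_clusters):
            members = flat[assign == c]
            if members.size:
                centroids[c] = members.mean()
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids, assign.reshape(weights.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128)).astype(np.float32)  # toy layer weights

centroids, idx = kmeans_1d(W, n_clusters=4)  # "2bit-1dim": 4 scalar centroids
W_compressed = centroids[idx]                # reconstruct weights from codebook

err = np.linalg.norm(W - W_compressed) / np.linalg.norm(W)
print(f"codebook: {np.round(centroids, 3)}, relative error: {err:.3f}")
```

Halving the bit width to 1 bit (a two-entry codebook) shrinks index storage further but coarsens the approximation considerably, which is consistent with the gap between the two claimed accuracy figures above.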