
Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
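To make the three techniques named above concrete, here is a minimal, illustrative NumPy sketch of unstructured magnitude pruning, symmetric 8-bit weight quantization, and low-rank factorization applied to a single weight matrix. It is not taken from any of the listed papers; the function names and parameters are our own illustrative choices.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(weights.size * sparsity)           # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_int8(weights: np.ndarray):
    """Symmetric uniform quantization to signed 8-bit integers (4x smaller than float32)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale                            # dequantize with q * scale

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Approximate a weight matrix by two thin factors via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    a = u[:, :rank] * s[:rank]                 # shape (m, rank)
    b = vt[:rank, :]                           # shape (rank, n)
    return a, b                                # a @ b approximates weights

# Toy example on one 256x256 layer
w = np.random.randn(256, 256).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)    # 90% of entries zeroed
w_int8, scale = quantize_int8(w)               # int8 codes plus one float scale
a, b = low_rank_factorize(w, rank=32)          # 2*256*32 vs 256*256 parameters
```

In practice these operations are applied per layer and are typically followed by fine-tuning (or distillation from the uncompressed model) to recover the accuracy lost to compression.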

Papers

Showing 401–450 of 1356 papers

Title | Status | Hype
USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models | - | 0
Rethinking Compression: Reduced Order Modelling of Latent Features in Large Language Models | Code | 1
Neural Architecture Codesign for Fast Bragg Peak Analysis | - | 0
Large Multimodal Model Compression via Efficient Pruning and Distillation at AntGroup | Code | 0
Understanding the Effect of Model Compression on Social Bias in Large Language Models | Code | 0
Language Model Knowledge Distillation for Efficient Question Answering in Spanish | Code | 0
Physics Inspired Criterion for Pruning-Quantization Joint Learning | Code | 0
The Efficiency Spectrum of Large Language Models: An Algorithmic Survey | Code | 0
LayerCollapse: Adaptive compression of neural networks | - | 0
Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning | Code | 0
Towards Higher Ranks via Adversarial Weight Pruning | Code | 0
Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence | - | 0
Cosine Similarity Knowledge Distillation for Individual Class Information Transfer | - | 0
Knowledge Distillation Based Semantic Communications For Multiple Users | - | 0
Education distillation:getting student models to learn in shcools | - | 0
Efficient Transformer Knowledge Distillation: A Performance Review | - | 0
Compact 3D Gaussian Representation for Radiance Field | Code | 2
Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper | - | 0
Shedding the Bits: Pushing the Boundaries of Quantization with Minifloats on FPGAs | - | 0
LQ-LoRA: Low-rank Plus Quantized Matrix Decomposition for Efficient Language Model Finetuning | Code | 1
Efficient Neural Networks for Tiny Machine Learning: A Comprehensive Review | - | 0
On the Impact of Calibration Data in Post-training Quantization and Pruning | - | 0
A Speed Odyssey for Deployable Quantization of LLMs | - | 0
FedCode: Communication-Efficient Federated Learning via Transferring Codebooks | - | 0
EPIM: Efficient Processing-In-Memory Accelerators based on Epitome | - | 0
What is Lost in Knowledge Distillation? | - | 0
Supervised domain adaptation for building extraction from off-nadir aerial images | - | 0
Asymmetric Masked Distillation for Pre-Training Small Foundation Models | Code | 0
Data-Free Distillation of Language Model by Text-to-Text Transfer | - | 0
Divergent Token Metrics: Measuring degradation to prune away LLM components -- and optimize quantization | - | 0
Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression | - | 0
LXMERT Model Compression for Visual Question Answering | Code | 0
Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images | - | 0
In defense of parameter sharing for model-compression | - | 0
USDC: Unified Static and Dynamic Compression for Visual Transformer | - | 0
Efficient Apple Maturity and Damage Assessment: A Lightweight Detection Model with GAN and Attention Mechanism | - | 0
What do larger image classifiers memorise? | - | 0
Accelerating Machine Learning Primitives on Commodity Hardware | - | 0
A Corrected Expected Improvement Acquisition Function Under Noisy Observations | Code | 0
Model Compression in Practice: Lessons Learned from Practitioners Creating On-device Machine Learning Experiences | - | 0
Robustness-Guided Image Synthesis for Data-Free Quantization | - | 0
Sparse Deep Learning for Time Series Data: Theory and Applications | - | 0
ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models | - | 0
Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation | - | 0
Artemis: HE-Aware Training for Efficient Privacy-Preserving Machine Learning | - | 0
Bridging the Gap Between Foundation Models and Heterogeneous Federated Learning | - | 0
Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression | - | 0
CAIT: Triple-Win Compression towards High Accuracy, Fast Inference, and Favorable Transferability For ViTs | - | 0
On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild" | - | 0
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks | - | 0
Page 9 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified