SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
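
As a rough illustration of the three techniques named in the description, here is a minimal NumPy sketch that applies magnitude pruning, SVD-based low-rank factorization, and int8 weight quantization to a single weight matrix. The matrix, function names, and hyperparameters (sparsity, rank) are invented for this example; real pipelines operate on entire networks with dedicated tooling (e.g. torch.nn.utils.prune in PyTorch).

```python
# Toy sketch of three compression techniques on one weight matrix.
# All shapes and hyperparameters are illustrative, not from any paper.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in dense layer

# 1) Parameter pruning: zero the smallest-magnitude 90% of weights.
def magnitude_prune(w, sparsity=0.9):
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

# 2) Low-rank factorization: keep the top-r singular components,
#    storing two thin factors (2 * 256 * r values) instead of 256 * 256.
def low_rank_factorize(w, r=32):
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :r] * s[:r], vt[:r, :]

# 3) Weight quantization: symmetric per-tensor int8 with a float scale.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale  # dequantize as q * scale

W_pruned = magnitude_prune(W)
U, V = low_rank_factorize(W)
Wq, scale = quantize_int8(W)

print("nonzeros after pruning:", np.count_nonzero(W_pruned))
print("rank-32 relative error:", np.linalg.norm(W - U @ V) / np.linalg.norm(W))
print("int8 relative error:", np.linalg.norm(W - Wq.astype(np.float32) * scale) / np.linalg.norm(W))
```

Each method trades a small reconstruction error for a large reduction in stored parameters or bits per weight.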

Papers

Showing 801–850 of 1356 papers

All papers listed below have an empty Status and a Hype score of 0.

Match to Win: Analysing Sequences Lengths for Efficient Self-supervised Learning in Speech and Audio
Matrix and tensor decompositions for training binary neural networks
Maxwell's Demon at Work: Efficient Pruning by Leveraging Saturation of Neurons
Against Membership Inference Attack: Pruning is All You Need
MCNC: Manifold Constrained Network Compression
26ms Inference Time for ResNet-50: Towards Real-Time Execution of all DNNs on Smartphone
Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning
Memory- and Communication-Aware Model Compression for Distributed Deep Learning Inference on IoT
A "Network Pruning Network" Approach to Deep Model Compression
Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy
Memory-Friendly Scalable Super-Resolution via Rewinding Lottery Ticket Hypothesis
An Empirical Study of Low Precision Quantization for TinyML
Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains
An Empirical Investigation of Matrix Factorization Methods for Pre-trained Transformers
MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression
To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference
A Multi-objective Complex Network Pruning Framework Based on Divide-and-conquer and Global Performance Impairment Ranking
MIMONet: Multi-Input Multi-Output On-Device Deep Learning
MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks
Minimally Invasive Surgery for Sparse Neural Networks in Contrastive Manner
To Know Where We Are: Vision-Based Positioning in Outdoor Environments
Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal
Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework
An Embedded Deep Learning Object Detection Model For Traffic In Asian Countries
MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models
MLPrune: Multi-Layer Pruning for Automated Neural Network Compression
Topology Distillation for Recommender System
An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs
MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases
Mobile Fitting Room: On-device Virtual Try-on via Diffusion Models
An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation
MoDeGPT: Modular Decomposition for Large Language Model Compression
Model Adaptation for Time Constrained Embodied Control
Model Blending for Text Classification
Model Compression
Model Compression and Efficient Inference for Large Language Models: A Survey
Model compression as constrained optimization, with application to neural nets. Part II: quantization
Model compression as constrained optimization, with application to neural nets. Part I: general framework
Model compression as constrained optimization, with application to neural nets. Part V: combining compressions
Scalable Model Compression by Entropy Penalized Reparameterization
Model Compression for DNN-based Speaker Verification Using Weight Quantization
Accelerating deep neural networks for efficient scene understanding in automotive cyber-physical systems
Accelerating Deep Learning with Dynamic Data Pruning
Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography
Model Compression for Resource-Constrained Mobile Robots
Model Compression in Practice: Lessons Learned from Practitioners Creating On-device Machine Learning Experiences
Model Compression Methods for YOLOv5: A Review
torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation
Model compression using knowledge distillation with integrated gradients
Model Compression Using Optimal Transport
Page 17 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified