
Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
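
As a quick illustration of the three techniques named above, the sketch below applies magnitude-based parameter pruning, a truncated-SVD low-rank factorization, and dynamic int8 weight quantization to a toy two-layer network. This is a minimal sketch assuming PyTorch; the layer sizes, 50% sparsity, and target rank r = 8 are illustrative choices, not values taken from any paper listed on this page.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a network to be compressed.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))

# 1. Parameter pruning: zero out the 50% smallest-magnitude weights
#    of the first layer, then make the sparsity permanent.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
prune.remove(model[0], "weight")

# 2. Low-rank factorization: replace the last layer's weight matrix with
#    a rank-r SVD approximation. In practice the layer would be split into
#    two smaller layers (256 -> r and r -> 10) to actually save parameters.
W = model[2].weight.data                     # shape (10, 256)
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
r = 8                                        # illustrative target rank
model[2].weight.data = (U[:, :r] * S[:r]) @ Vh[:r, :]

# 3. Weight quantization: store Linear weights as int8, dequantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized(torch.randn(1, 256)).shape)  # torch.Size([1, 10])
```

In practice these steps are rarely applied blindly in sequence; each is typically followed by fine-tuning (often with knowledge distillation, as in the KD-MRI source above) to recover the accuracy lost at high compression ratios.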

Papers

Showing 451–500 of 1356 papers

Title | Status | Hype
Training dynamic models using early exits for automatic speech recognition on resource-constrained devices | Code | 0
Pruning Large Language Models via Accuracy Predictor | - | 0
Two-Step Knowledge Distillation for Tiny Speech Enhancement | - | 0
CoLLD: Contrastive Layer-to-layer Distillation for Compressing Multilingual Pre-trained Speech Encoders | - | 0
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization | - | 0
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models | - | 0
Compressing Vision Transformers for Low-Resource Visual Learning | Code | 0
ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency Transformation | - | 0
Uncovering the Hidden Cost of Model Compression | Code | 0
Computation-efficient Deep Learning for Computer Vision: A Survey | - | 0
Improving Knowledge Distillation for BERT Models: Loss Functions, Mapping Methods, and Weight Tuning | - | 0
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models | Code | 2
DLIP: Distilling Language-Image Pre-training | - | 0
QD-BEV: Quantization-aware View-guided Distillation for Multi-view 3D Object Detection | - | 0
Learning Disentangled Representation with Mutual Information Maximization for Real-Time UAV Tracking | - | 0
An Empirical Study of CLIP for Text-based Person Search | Code | 1
SHARK: A Lightweight Model Compression Approach for Large-scale Recommender Systems | - | 0
Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey | Code | 2
Spike-and-slab shrinkage priors for structurally sparse Bayesian neural networks | - | 0
Benchmarking Adversarial Robustness of Compressed Deep Learning Models | - | 0
Shortcut-V2V: Compression Framework for Video-to-Video Translation based on Temporal Redundancy Reduction | - | 0
A Survey on Model Compression for Large Language Models | - | 0
FedEdge AI-TC: A Semi-supervised Traffic Classification Method based on Trusted Federated Deep Learning for Mobile Edge Computing | - | 0
Resource Constrained Model Compression via Minimax Optimization for Spiking Neural Networks | Code | 0
Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models | Code | 1
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization | - | 0
MIMONet: Multi-Input Multi-Output On-Device Deep Learning | - | 0
Model Compression Methods for YOLOv5: A Review | - | 0
Impact of Disentanglement on Pruning Neural Networks | - | 0
Knowledge Distillation for Object Detection: from generic to remote sensing datasets | - | 0
CA-LoRA: Adapting Existing LoRA for Compressed LLMs to Enable Efficient Multi-Tasking on Personal Devices | Code | 0
Distilled Pruning: Using Synthetic Data to Win the Lottery | Code | 0
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data | Code | 0
TensorGPT: Efficient Compression of Large Language Models based on Tensor-Train Decomposition | - | 0
Data-Free Quantization via Mixed-Precision Compensation without Fine-Tuning | - | 0
Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precision | Code | 1
An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs | - | 0
Constraint-aware and Ranking-distilled Token Pruning for Efficient Transformer Inference | Code | 1
Feature Adversarial Distillation for Point Cloud Classification | - | 0
Low-Rank Prune-And-Factorize for Language Model Compression | - | 0
Partitioning-Guided K-Means: Extreme Empty Cluster Resolution for Extreme Model Compression | - | 0
Data-Free Backbone Fine-Tuning for Pruned Neural Networks | Code | 0
LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation | - | 0
DynaQuant: Compressing Deep Learning Training Checkpoints via Dynamic Quantization | - | 0
CrossKD: Cross-Head Knowledge Distillation for Object Detection | Code | 1
HiNeRV: Video Compression with Hierarchical Encoding-based Neural Representation | Code | 1
Neural Network Compression using Binarization and Few Full-Precision Weights | - | 0
Efficient and Robust Quantization-aware Training via Adaptive Coreset Selection | Code | 1
Deep Model Compression Also Helps Models Capture Ambiguity | Code | 0
A Brief Review of Hypernetworks in Deep Learning | Code | 0
Page 10 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified