SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
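Of the techniques named above, unstructured magnitude pruning is the simplest to illustrate: zero out the smallest-magnitude fraction of a weight tensor. A minimal NumPy sketch (the function name and API here are illustrative, not from any specific library):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep only larger-magnitude weights
    return weights * mask

W = np.array([[0.9, -0.05],
              [0.01, -0.7]])
pruned = magnitude_prune(W, sparsity=0.5)  # zeros the two smallest entries
```

In practice, frameworks apply such masks iteratively during fine-tuning so the remaining weights can compensate for the removed ones; this one-shot version is only a sketch of the core idea.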

Papers

Showing 1151–1200 of 1356 papers

Title | Status | Hype
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency | Code | 0
Boosting Large Language Models with Mask Fine-Tuning | Code | 0
A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning | Code | 0
I3D: Transformer architectures with input-dependent dynamic depth for speech recognition | Code | 0
Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory | Code | 0
EAQuant: Enhancing Post-Training Quantization for MoE Models via Expert-Aware Optimization | Code | 0
Thanos: A Block-wise Pruning Algorithm for Efficient Large Language Model Compression | Code | 0
Real-Time Correlation Tracking via Joint Model Compression and Transfer | Code | 0
HTR-JAND: Handwritten Text Recognition with Joint Attention Network and Knowledge Distillation | Code | 0
Knowledge Distillation as Semiparametric Inference | Code | 0
Binary Classification as a Phase Separation Process | Code | 0
Ultron: Enabling Temporal Geometry Compression of 3D Mesh Sequences using Temporal Correspondence and Mesh Deformation | Code | 0
The Efficiency Spectrum of Large Language Models: An Algorithmic Survey | Code | 0
Knowledge Distillation for End-to-End Person Search | Code | 0
Towards Sparsification of Graph Neural Networks | Code | 0
Compact and Optimal Deep Learning with Recurrent Parameter Generators | Code | 0
StructADMM: A Systematic, High-Efficiency Framework of Structured Weight Pruning for DNNs | Code | 0
Reinforced Knowledge Distillation for Time Series Regression | Code | 0
Knowledge Distillation for Singing Voice Detection | Code | 0
Spike encoding techniques for IoT time-varying signals benchmarked on a neuromorphic classification task | Code | 0
Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation | Code | 0
Occam Gradient Descent | Code | 0
CoDiNet: Path Distribution Modeling with Consistency and Diversity for Dynamic Routing | Code | 0
HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression | Code | 0
Is Smaller Always Faster? Tradeoffs in Compressing Self-Supervised Speech Transformers | Code | 0
RemoteTrimmer: Adaptive Structural Pruning for Remote Sensing Image Classification | Code | 0
A Programmable Approach to Neural Network Compression | Code | 0
Beyond Perplexity: Multi-dimensional Safety Evaluation of LLM Compression | Code | 0
How does topology of neural architectures impact gradient propagation and model performance? | Code | 0
Compressing Convolutional Neural Networks via Factorized Convolutional Filters | Code | 0
Knowledge Distillation with Reptile Meta-Learning for Pretrained Language Model Compression | Code | 0
Knowledge Grafting of Large Language Models | Code | 0
High-fidelity 3D Model Compression based on Key Spheres | Code | 0
Knowledge Translation: A New Pathway for Model Compression | Code | 0
Bayesian Tensorized Neural Networks with Automatic Rank Selection | Code | 0
Uncovering the Hidden Cost of Model Compression | Code | 0
DyCE: Dynamically Configurable Exiting for Deep Learning Compression and Real-time Scaling | Code | 0
On-Device Neural Language Model Based Word Prediction | Code | 0
VecQ: Minimal Loss DNN Model Compression With Vectorized Weight Quantization | Code | 0
SSDA: Secure Source-Free Domain Adaptation | Code | 0
How does topology influence gradient propagation and model performance of deep networks with DenseNet-type skip connections? | Code | 0
Language Model Knowledge Distillation for Efficient Question Answering in Spanish | Code | 0
Universal approximation and model compression for radial neural networks | Code | 0
Large Multimodal Model Compression via Efficient Pruning and Distillation at AntGroup | Code | 0
Resource Constrained Model Compression via Minimax Optimization for Spiking Neural Networks | Code | 0
Data Efficient Stagewise Knowledge Distillation | Code | 0
Online Ensemble Model Compression using Knowledge Distillation | Code | 0
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data | Code | 0
Compressed Object Detection | Code | 0
The Shallow End: Empowering Shallower Deep-Convolutional Networks through Auxiliary Outputs | Code | 0
Page 24 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified
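The DKM entries above refer to differentiable k-means clustering of weights; as a much simpler illustration of what "2-bit" weight quantization means, the sketch below (illustrative code, not the DKM method itself) maps a weight tensor onto 2**bits uniformly spaced levels and reconstructs the lossy dequantized values:

```python
import numpy as np

def uniform_quantize(weights, bits=2):
    """Quantize to 2**bits uniform levels spanning [min, max], then dequantize."""
    levels = 2 ** bits
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (levels - 1)        # step between adjacent levels
    codes = np.round((weights - w_min) / scale)   # integer codes in [0, levels-1]
    return codes * scale + w_min                  # lossy reconstruction

W = np.array([-1.0, -0.3, 0.2, 1.0])
W_hat = uniform_quantize(W, bits=2)  # only 4 distinct values survive
```

At 2 bits each weight stores only a 4-value code plus a shared scale and offset, which is where the compression comes from; the accuracy gap between the 2-bit and 1-bit rows above reflects how aggressively the codebook is shrunk.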