Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
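As a concrete illustration of the three techniques named above, the sketch below applies magnitude-based parameter pruning, truncated-SVD low-rank factorization, and uniform 8-bit weight quantization to a single weight matrix. It is a minimal NumPy example under assumed, illustrative settings (matrix shape, 90% sparsity, rank 32, int8); it is not taken from any paper listed on this page.

```python
# Minimal NumPy sketch of three compression ideas mentioned above.
# All settings (shape, 90% sparsity, rank 32, int8) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)  # a dense layer's weights

# 1) Parameter pruning: zero out the 90% of weights smallest in magnitude.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Low-rank factorization: approximate W with rank-r factors A @ B via
#    truncated SVD, storing (256*r + r*512) values instead of 256*512.
r = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]   # shape (256, r)
B = Vt[:r, :]          # shape (r, 512)
W_lowrank = A @ B

# 3) Weight quantization: symmetric uniform 8-bit quantization of W.
scale = np.abs(W).max() / 127.0
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale

print("pruned fraction kept:", np.count_nonzero(W_pruned) / W.size)  # ~0.10
print("low-rank parameter ratio:", (A.size + B.size) / W.size)       # ~0.19
print("quantization error (max abs):", np.abs(W - W_dequant).max())
```

In practice these steps degrade accuracy, so compressed models are typically fine-tuned (or distilled from the original network) afterwards to recover it.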

Papers

Showing 801–850 of 1356 papers

Title | Status | Hype
The Effect of Model Compression on Fairness in Facial Expression Recognition | | 0
The Impact of Quantization and Pruning on Deep Reinforcement Learning Models | | 0
The Knowledge Within: Methods for Data-Free Model Compression | | 0
The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve? | | 0
Theoretical Guarantees for Low-Rank Compression of Deep Neural Networks | | 0
The Potential of AutoML for Recommender Systems | | 0
Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method | | 0
Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation | | 0
Time-Correlated Sparsification for Efficient Over-the-Air Model Aggregation in Wireless Federated Learning | | 0
Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation | | 0
TinyM^2Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks for Sustainable Edge Deployment | | 0
TinyR1-32B-Preview: Boosting Accuracy with Branch-Merge Distillation | | 0
To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference | | 0
To Know Where We Are: Vision-Based Positioning in Outdoor Environments | | 0
Topology Distillation for Recommender System | | 0
torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation | | 0
Toward Extremely Low Bit and Lossless Accuracy in DNNs with Progressive ADMM | | 0
Toward Real-World Voice Disorder Classification | | 0
Towards Accurate Post-Training Quantization for Vision Transformer | | 0
Towards a tailored mixed-precision sub-8-bit quantization scheme for Gated Recurrent Units using Genetic Algorithms | | 0
Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper | | 0
Towards Building a Real Time Mobile Device Bird Counting System Through Synthetic Data Training and Model Compression | | 0
Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation | | 0
Towards efficient deep autoencoders for multivariate time series anomaly detection | | 0
Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning | | 0
Towards Efficient Full 8-bit Integer DNN Online Training on Resource-limited Devices without Batch Normalization | | 0
Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework | | 0
Towards Higher Ranks via Adversarial Weight Pruning | | 0
Towards Modality Transferable Visual Information Representation with Optimal Model Compression | | 0
Towards Optimal Compression: Joint Pruning and Quantization | | 0
Towards Superior Quantization Accuracy: A Layer-sensitive Approach | | 0
Do we need Label Regularization to Fine-tune Pre-trained Language Models? | | 0
Towards Zero-Shot Knowledge Distillation for Natural Language Processing | | 0
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models | | 0
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization | | 0
T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit | | 0
TrimLLM: Progressive Layer Dropping for Domain-Specific LLMs | | 0
Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search | | 0
Triple Sparsification of Graph Convolutional Networks without Sacrificing the Accuracy | | 0
Tuning Algorithms and Generators for Efficient Edge Inference | | 0
TutorNet: Towards Flexible Knowledge Distillation for End-to-End Speech Recognition | | 0
TwinDNN: A Tale of Two Deep Neural Networks | | 0
Two-Bit Networks for Deep Learning on Resource-Constrained Embedded Devices | | 0
Two is Better than One: Efficient Ensemble Defense for Robust and Compact Models | | 0
Two-Pass End-to-End ASR Model Compression | | 0
Two-Step Knowledge Distillation for Tiny Speech Enhancement | | 0
UDC: Unified DNAS for Compressible TinyML Models | | 0
Understanding and Improving Knowledge Distillation | | 0
Understanding LLMs: A Comprehensive Overview from Training to Inference | | 0
Unleashing Channel Potential: Space-Frequency Selection Convolution for SAR Object Detection | | 0
Page 17 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified