
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
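As a rough illustration of the three techniques named in the description above, here is a minimal, self-contained NumPy sketch (not drawn from any of the papers listed on this page) applying magnitude-based pruning, truncated-SVD low-rank factorization, and uniform 8-bit weight quantization to a single weight matrix. The matrix size, sparsity level, rank, and bit-width are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))              # hypothetical dense weight matrix

# Parameter pruning: zero out the smallest-magnitude weights.
sparsity = 0.8                                   # fraction of weights to remove
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
print(f"nonzeros after pruning: {np.count_nonzero(W_pruned) / W.size:.2%}")

# Low-rank factorization: approximate W with two thin factors via truncated SVD.
rank = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]                       # shape (256, rank)
B = Vt[:rank, :]                                 # shape (rank, 512)
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {W.size} -> {A.size + B.size}, relative error {rel_err:.3f}")

# Weight quantization: symmetric uniform 8-bit quantization of the weights.
num_bits = 8
qmax = 2 ** (num_bits - 1) - 1                   # 127 for int8
scale = np.abs(W).max() / qmax
W_q = np.clip(np.round(W / scale), -qmax - 1, qmax).astype(np.int8)
W_dq = W_q.astype(W.dtype) * scale
rel_err_q = np.linalg.norm(W - W_dq) / np.linalg.norm(W)
print(f"{num_bits}-bit quantization relative error: {rel_err_q:.4f}")
```

In practice these steps are typically followed by fine-tuning (or, for distillation-based methods, retraining a student model) to recover the accuracy lost to compression.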

Papers

Showing 701–750 of 1356 papers

Title | Status | Hype
ResSVD: Residual Compensated SVD for Large Language Model Compression | | 0
Retraining-Based Iterative Weight Quantization for Deep Neural Networks | | 0
Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression | | 0
Reverse-engineering recurrent neural network solutions to a hierarchical inference task for mice | | 0
Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication | | 0
Revisiting Data Augmentation in Model Compression: An Empirical and Comprehensive Study | | 0
Revisiting Self-Distillation | | 0
Reweighted Solutions for Weighted Low Rank Approximation | | 0
Riemannian Low-Rank Model Compression for Federated Learning with Over-the-Air Aggregation | | 0
RingMoE: Mixture-of-Modality-Experts Multi-Modal Foundation Models for Universal Remote Sensing Image Interpretation | | 0
RLRC: Reinforcement Learning-based Recovery for Compressed Vision-Language-Action Models | | 0
Robot Intent Recognition Method Based on State Grid Business Office | | 0
Robustness-Guided Image Synthesis for Data-Free Quantization | | 0
Robustness in Compressed Neural Networks for Object Detection | | 0
Robust testing of low-dimensional functions | | 0
Role of Mixup in Topological Persistence Based Knowledge Distillation for Wearable Sensor Data | | 0
Runtime Tunable Tsetlin Machines for Edge Inference on eFPGAs | | 0
SaleNet: A low-power end-to-end CNN accelerator for sustained attention level evaluation using EEG | | 0
Saten: Sparse Augmented Tensor Networks for Post-Training Compression of Large Language Models | | 0
Scalable Teacher Forcing Network for Semi-Supervised Large Scale Data Streams | | 0
Scaling Laws for Deep Learning | | 0
SCSP: Spectral Clustering Filter Pruning with Soft Self-adaption Manners | | 0
SDQ: Sparse Decomposed Quantization for LLM Inference | | 0
Search for Better Students to Learn Distilled Knowledge | | 0
SeKron: A Decomposition Method Supporting Many Factorization Structures | | 0
Selective Convolutional Units: Improving CNNs via Channel Selectivity | | 0
Self-calibration for Language Model Quantization and Pruning | | 0
Self-Supervised Generative Adversarial Compression | | 0
Efficient Personalized Speech Enhancement through Self-Supervised Learning | | 0
Semantic Retention and Extreme Compression in LLMs: Can We Have Both? | | 0
Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers | | 0
SEOFP-NET: Compression and Acceleration of Deep Neural Networks for Speech Enhancement Using Sign-Exponent-Only Floating-Points | | 0
Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition | | 0
Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression | | 0
Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures | | 0
SGAD: Soft-Guided Adaptively-Dropped Neural Network | | 0
SHARK: A Lightweight Model Compression Approach for Large-scale Recommender Systems | | 0
Shortcut-V2V: Compression Framework for Video-to-Video Translation based on Temporal Redundancy Reduction | | 0
Shrinking Bigfoot: Reducing wav2vec 2.0 footprint | | 0
ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning | | 0
Simplifying Two-Stage Detectors for On-Device Inference in Remote Sensing | | 0
Masked Training of Neural Networks with Partial Gradients | | 0
Small, Accurate, and Fast Vehicle Re-ID on the Edge: the SAFR Approach | | 0
Small Language Models: Architectures, Techniques, Evaluation, Problems and Future Adaptation | | 0
Small Object Detection Based on Modified FSSD and Model Compression | | 0
Smart Environmental Monitoring of Marine Pollution using Edge AI | | 0
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation | | 0
Smooth Model Compression without Fine-Tuning | | 0
CrAFT: Compression-Aware Fine-Tuning for Efficient Visual Task Adaptation | | 0
Soft Labeling Affects Out-of-Distribution Detection of Deep Neural Networks | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified