
Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
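
As a rough illustration of the three families named in the description above, the sketch below applies magnitude pruning, truncated-SVD low-rank factorization, and uniform weight quantization to a single dense weight matrix using NumPy. The function names, sparsity level, rank, and bit-width are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketches of the three compression families described above,
# applied to one weight matrix. Illustrative only.
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Parameter pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def low_rank_factorize(w: np.ndarray, rank: int = 16):
    """Low-rank factorization: approximate w with two thin factors via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # shape (m, rank)
    b = vt[:rank, :]             # shape (rank, n); w ~ a @ b with far fewer parameters
    return a, b

def uniform_quantize(w: np.ndarray, bits: int = 8) -> np.ndarray:
    """Weight quantization: round weights onto a uniform grid, then dequantize."""
    levels = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / levels
    q = np.round((w - lo) / scale)
    return q * scale + lo

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 128)).astype(np.float32)
    print("nonzeros after pruning:", np.count_nonzero(magnitude_prune(w)))
    a, b = low_rank_factorize(w)
    print("low-rank relative error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
    print("8-bit quantization relative error:",
          np.linalg.norm(w - uniform_quantize(w)) / np.linalg.norm(w))
```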

Papers

Showing 251–275 of 1,356 papers

Title | Status | Hype
Comprehensive Survey of Model Compression and Speed up for Vision Transformers | - | 0
Are We There Yet? A Measurement Study of Efficiency for LLM Applications on Mobile Devices | - | 0
Compressed models are NOT miniature versions of large models | - | 0
Artemis: HE-Aware Training for Efficient Privacy-Preserving Machine Learning | - | 0
A Novel Architecture Slimming Method for Network Pruning and Knowledge Distillation | - | 0
Adaptive Learning of Tensor Network Structures | - | 0
Characterizing the Accuracy -- Efficiency Trade-off of Low-rank Decomposition in Language Models | - | 0
Accelerating Framework of Transformer by Hardware Design and Model Compression Co-Optimization | - | 0
CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness | - | 0
Channel Compression: Rethinking Information Redundancy among Channels in CNN Architecture | - | 0
An Improving Framework of regularization for Network Compression | - | 0
Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN | - | 0
Adaptive Quantization of Neural Networks | - | 0
CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression | - | 0
CURing Large Models: Compression via CUR Decomposition | - | 0
Accelerating deep neural networks for efficient scene understanding in automotive cyber-physical systems | - | 0
Adaptive Neural Connections for Sparsity Learning | - | 0
Croesus: Multi-Stage Processing and Transactions for Video-Analytics in Edge-Cloud Systems | - | 0
Cascaded channel pruning using hierarchical self-distillation | - | 0
Can We Find Strong Lottery Tickets in Generative Models? | - | 0
A New Clustering-Based Technique for the Acceleration of Deep Convolutional Networks | - | 0
Cross-Channel Intragroup Sparsity Neural Network | - | 0
Can Students Outperform Teachers in Knowledge Distillation based Model Compression? | - | 0
Can Students Beyond The Teacher? Distilling Knowledge from Teacher's Bias | - | 0
A "Network Pruning Network" Approach to Deep Model Compression | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified
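
The two benchmark entries above compress MobileBERT weights with DKM (differentiable k-means clustering) into 1-dimensional 2-bit and 1-bit codebooks. The sketch below is not the paper's DKM training procedure; it is only a hedged illustration of the underlying idea of cluster-based weight quantization, using plain (non-differentiable) Lloyd's k-means and illustrative function names.

```python
# Illustrative 1-D, cluster-based weight quantization: every scalar weight is
# replaced by one of 2**bits shared centroids. Not the actual DKM algorithm.
import numpy as np

def kmeans_1d(values: np.ndarray, k: int, iters: int = 25) -> np.ndarray:
    """Plain Lloyd's k-means on scalar values; returns k centroids."""
    centroids = np.quantile(values, np.linspace(0.0, 1.0, k))  # spread initial centroids
    for _ in range(iters):
        assign = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = values[assign == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids

def cluster_quantize(w: np.ndarray, bits: int = 2) -> np.ndarray:
    """Map each weight to its nearest shared centroid (2 bits -> 4 centroids)."""
    flat = w.ravel()
    centroids = kmeans_1d(flat, k=2 ** bits)
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(w.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64)).astype(np.float32)
    w_q = cluster_quantize(w, bits=2)
    print("unique values after 2-bit clustering:", np.unique(w_q).size)  # at most 4
```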