SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
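To make the three techniques named above concrete, here is a minimal NumPy sketch of each applied to a single toy weight matrix. This is an illustrative sketch only, not any paper's implementation; the matrix size, sparsity level, rank, and bit width are arbitrary assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # toy weight matrix

# 1. Parameter pruning: zero out the smallest-magnitude 90% of weights.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2. Low-rank factorization: approximate W by a rank-32 SVD product,
#    storing two thin factors instead of the full matrix.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
rank = 32
A = U[:, :rank] * S[:rank]   # shape (256, 32)
B = Vt[:rank, :]             # shape (32, 256)
W_lowrank = A @ B

# 3. Weight quantization: uniform symmetric 8-bit quantization
#    with a single per-tensor scale.
scale = np.abs(W).max() / 127.0
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale

for name, approx in [("pruned", W_pruned),
                     ("low-rank", W_lowrank),
                     ("quantized", W_dequant)]:
    err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    print(f"{name:>9}: relative reconstruction error {err:.3f}")
```

Each variant trades accuracy for storage differently: pruning keeps a sparse subset of weights, the factorization stores 2 x 256 x 32 values instead of 256 x 256, and quantization shrinks each weight from 32 bits to 8.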

Papers

Showing 526–550 of 1356 papers

Title | Status | Hype
Effective Multi-Stage Training Model For Edge Computing Devices In Intrusion Detection |  | 0
Characterizing the Accuracy -- Efficiency Trade-off of Low-rank Decomposition in Language Models |  | 0
Adaptive Learning of Tensor Network Structures |  | 0
Effective Interplay between Sparsity and Quantization: From Theory to Practice |  | 0
Effective and Efficient One-pass Compression of Speech Foundation Models Using Sparsity-aware Self-pinching Gates |  | 0
Effective and Efficient Mixed Precision Quantization of Speech Foundation Models |  | 0
Education distillation:getting student models to learn in shcools |  | 0
Channel Compression: Rethinking Information Redundancy among Channels in CNN Architecture |  | 0
Edge-Optimized Deep Learning & Pattern Recognition Techniques for Non-Intrusive Load Monitoring of Energy Time Series |  | 0
Edge-MultiAI: Multi-Tenancy of Latency-Sensitive Deep Learning Applications on Edge |  | 0
Edge-First Language Model Inference: Models, Metrics, and Tradeoffs |  | 0
Edge Deep Learning for Neural Implants |  | 0
Order of Compression: A Systematic and Optimal Sequence to Combinationally Compress CNN |  | 0
An Improving Framework of regularization for Network Compression |  | 0
Adaptive Quantization of Neural Networks |  | 0
Accelerating Framework of Transformer by Hardware Design and Model Compression Co-Optimization |  | 0
Edge-AI for Agriculture: Lightweight Vision Models for Disease Detection in Resource-Limited Settings |  | 0
Edge AI: Evaluation of Model Compression Techniques for Convolutional Neural Networks |  | 0
EDCompress: Energy-Aware Model Compression for Dataflows |  | 0
ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models |  | 0
Cascaded channel pruning using hierarchical self-distillation |  | 0
DynaQuant: Compressing Deep Learning Training Checkpoints via Dynamic Quantization |  | 0
Dynamic Sparse Learning: A Novel Paradigm for Efficient Recommendation |  | 0
Can We Find Strong Lottery Tickets in Generative Models? |  | 0
A New Clustering-Based Technique for the Acceleration of Deep Convolutional Networks |  | 0
Page 22 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified
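For context on the DKM entries: DKM compresses weights by clustering them into a small codebook, so "2bit-1dim" means each scalar weight is replaced by a 2-bit index into four learned centroids. The sketch below is not the DKM algorithm itself, which uses differentiable soft assignments learned jointly with training; it is a post-hoc hard k-means stand-in, with made-up parameters, meant only to illustrate the index-plus-codebook storage format.

```python
import numpy as np

def kmeans_quantize(w, n_bits=2, n_iter=20, seed=0):
    """Cluster a 1-D weight vector into 2**n_bits scalar centroids.

    Hard k-means for illustration; DKM proper replaces the argmin with a
    differentiable soft assignment so clustering is learned during training.
    """
    rng = np.random.default_rng(seed)
    k = 2 ** n_bits
    centroids = rng.choice(w, size=k, replace=False)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid.
        idx = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = w[idx == j].mean()
    return idx.astype(np.uint8), centroids

w = np.random.default_rng(1).normal(size=10_000).astype(np.float32)
idx, codebook = kmeans_quantize(w, n_bits=2)
w_hat = codebook[idx]  # reconstruct weights from indices + codebook
print("codebook:", np.sort(codebook))
print("relative error:", np.linalg.norm(w - w_hat) / np.linalg.norm(w))
```

The storage win comes from keeping only the 2-bit indices plus the tiny codebook; the benchmark rows above suggest how aggressive the bit width can be before accuracy degrades (82.13 at 2 bits versus 63.17 at 1 bit).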