
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
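
Two of the compression primitives named above, parameter pruning and weight quantization, can be illustrated with a minimal NumPy sketch. This is not drawn from any paper listed below; the function names, the 90% sparsity target, and the 8-bit setting are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until `sparsity` fraction is zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_uniform(weights: np.ndarray, num_bits: int = 8):
    """Uniform affine quantization; returns integer codes plus (scale, zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # guard against constant tensors
    zero_point = int(round(qmin - w_min / scale))
    # Codes fit in uint8 for num_bits <= 8.
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map integer codes back to approximate float weights."""
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256)).astype(np.float32)
    w_pruned = magnitude_prune(w, sparsity=0.9)        # 90% of weights zeroed
    q, s, zp = quantize_uniform(w_pruned, num_bits=8)  # 4x smaller than float32
    err = np.abs(dequantize(q, s, zp) - w_pruned).max()
    print(f"sparsity={np.mean(w_pruned == 0):.2f}, max dequant error={err:.4f}")
```

In practice the pruned weights would be stored in a sparse format and the quantized codes alongside their (scale, zero_point) pair; the sketch only shows the numerical transformations each technique applies.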

Papers

Showing 101–150 of 1,356 papers

Title | Status | Hype
LiteYOLO-ID: A Lightweight Object Detection Network for Insulator Defect Detection | Code | 1
An Empirical Study of CLIP for Text-based Person Search | Code | 1
Global Sparse Momentum SGD for Pruning Very Deep Neural Networks | Code | 1
Densely Guided Knowledge Distillation using Multiple Teacher Assistants | Code | 1
Differentiable Model Compression via Pseudo Quantization Noise | Code | 1
Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems | Code | 1
HiNeRV: Video Compression with Hierarchical Encoding-based Neural Representation | Code | 1
An Information Theory-inspired Strategy for Automatic Network Pruning | Code | 1
DiSparse: Disentangled Sparsification for Multitask Model Compression | Code | 1
Discrimination-aware Network Pruning for Deep Model Compression | Code | 1
DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization | Code | 1
Distilling Linguistic Context for Language Model Compression | Code | 1
Distilling Object Detectors with Feature Richness | Code | 1
DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization | Code | 1
DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers | Code | 1
Dual Relation Knowledge Distillation for Object Detection | Code | 1
Dynamic Channel Pruning: Feature Boosting and Suppression | Code | 1
Forget the Data and Fine-Tuning! Just Fold the Network to Compress | Code | 1
3DG-STFM: 3D Geometric Guided Student-Teacher Feature Matching | Code | 1
Dynamic Slimmable Network | Code | 1
AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression | Code | 1
A Real-time Low-cost Artificial Intelligence System for Autonomous Spraying in Palm Plantations | Code | 1
FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation | Code | 1
Neural Pruning via Growing Regularization | Code | 1
Gaussian RAM: Lightweight Image Classification via Stochastic Retina-Inspired Glimpse and Reinforcement Learning | Code | 1
Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better | Code | 1
FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation | Code | 1
Faster and Lighter LLMs: A Survey on Current Challenges and Way Forward | Code | 1
FedUKD: Federated UNet Model with Knowledge Distillation for Land Use Classification from Satellite and Street Views | Code | 1
General Instance Distillation for Object Detection | Code | 1
BERT-EMD: Many-to-Many Layer Mapping for BERT Compression with Earth Mover's Distance | Code | 1
Basis Sharing: Cross-Layer Parameter Sharing for Large Language Model Compression | Code | 1
Environmental Sound Classification on the Edge: A Pipeline for Deep Acoustic Networks on Extremely Resource-Constrained Devices | Code | 1
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1
Basic Binary Convolution Unit for Binarized Image Restoration Network | Code | 1
Bidirectional Distillation for Top-K Recommender System | Code | 1
Bit-mask Robust Contrastive Knowledge Distillation for Unsupervised Semantic Hashing | Code | 1
Fast Vocabulary Transfer for Language Model Compression | Code | 1
FFNeRV: Flow-Guided Frame-Wise Neural Representations for Videos | Code | 1
A Survey on Dynamic Neural Networks: from Computer Vision to Multi-modal Sensor Fusion | Code | 1
Finding the Task-Optimal Low-Bit Sub-Distribution in Deep Neural Networks | Code | 1
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing | Code | 1
CHEX: CHannel EXploration for CNN Model Compression | Code | 1
Class Attention Transfer Based Knowledge Distillation | Code | 1
Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks | Code | 1
CoA: Towards Real Image Dehazing via Compression-and-Adaptation | Code | 1
Communication-Computation Trade-Off in Resource-Constrained Edge Inference | Code | 1
Communication-Efficient Diffusion Strategy for Performance Improvement of Federated Learning with Non-IID Data | Code | 1
EvoPress: Towards Optimal Dynamic Model Compression via Evolutionary Search | Code | 1
Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators | Code | 1
Page 3 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified