
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
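The description above names three compression families; below is a minimal NumPy sketch of each (magnitude pruning, low-rank factorization, and uniform weight quantization) applied to a single hypothetical weight matrix. The matrix shape, sparsity level, rank, and bit width are illustrative assumptions, not taken from the cited paper or from any paper listed on this page.

```python
# Minimal sketch (NumPy only) of the three compression ideas named above,
# applied to one hypothetical dense-layer weight matrix. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)  # hypothetical layer weights

# 1) Magnitude pruning: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Low-rank factorization: keep the top-k singular directions, so W is
#    approximated by the product of two thin factors U_k @ V_k.
k = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
U_k = U[:, :k] * S[:k]          # shape (256, k)
V_k = Vt[:k, :]                 # shape (k, 512)
W_lowrank = U_k @ V_k

# 3) Uniform 8-bit quantization: store int8 codes plus one float scale.
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)      # stored weights
W_dequant = W_q.astype(np.float32) * scale     # values used at inference

print("nonzero fraction after pruning:", np.count_nonzero(W_pruned) / W.size)
print("low-rank relative error:", np.linalg.norm(W - W_lowrank) / np.linalg.norm(W))
print("quantization relative error:", np.linalg.norm(W - W_dequant) / np.linalg.norm(W))
```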

Papers

Showing 851–900 of 1356 papers (page 18 of 28)

Title | Status | Hype
Semi-Online Knowledge Distillation | Code | 0
Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration | | 0
Local-Selective Feature Distillation for Single Image Super-Resolution | | 0
Structured Pruning Learns Compact and Accurate Models | | 0
Weight Squeezing: Reparameterization for Knowledge Transfer and Model Compression | | 0
Learning-Based Symbol Level Precoding: A Memory-Efficient Unsupervised Learning Approach | | 0
Domain Generalization on Efficient Acoustic Scene Classification using Residual Normalization | | 0
Learning Interpretation with Explainable Knowledge Distillation | | 0
SEOFP-NET: Compression and Acceleration of Deep Neural Networks for Speech Enhancement Using Sign-Exponent-Only Floating-Points | | 0
A Survey on Green Deep Learning | | 0
Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models | | 0
Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators | | 0
How to Select One Among All? An Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding | | 0
ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization framework for FPGA | | 0
On Cross-Layer Alignment for Model Fusion of Heterogeneous Neural Networks | | 0
Reconstructing Pruned Filters using Cheap Spatial Transformations | | 0
Exploring Gradient Flow Based Saliency for DNN Model Compression | Code | 0
How and When Adversarial Robustness Transfers in Knowledge Distillation? | | 0
Analysis of memory consumption by neural networks based on hyperparameters | | 0
Augmenting Knowledge Distillation With Peer-To-Peer Mutual Learning For Model Compression | | 0
Accelerating Framework of Transformer by Hardware Design and Model Compression Co-Optimization | | 0
HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression | Code | 0
A Short Study on Compressing Decoder-Based Language Models | | 0
Robustness Challenges in Model Distillation and Pruning for Natural Language Understanding | | 0
Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher | | 0
Kronecker Decomposition for GPT Compression | | 0
Differentiable Network Pruning for Microcontrollers | | 0
A Memory-Efficient Learning Framework for Symbol Level Precoding with Quantized NN Weights | | 0
Rectifying the Data Bias in Knowledge Distillation | | 0
FedDQ: Communication-Efficient Federated Learning with Descending Quantization | | 0
KIMERA: Injecting Domain Knowledge into Vacant Transformer Heads | | 0
A Unified Knowledge Distillation Framework for Deep Directed Graphical Models | | 0
Sparse Unbalanced GAN Training with In-Time Over-Parameterization | | 0
HFSP: A Hardware-friendly Soft Pruning Framework for Vision Transformers | | 0
Model Compression via Symmetries of the Parameter Space | | 0
Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning | | 0
Robot Intent Recognition Method Based on State Grid Business Office | | 0
Prototypical Contrastive Predictive Coding | | 0
Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning | Code | 0
Classification-based Quality Estimation: Small and Efficient Models for Real-world Applications | | 0
Experimental implementation of a neural network optical channel equalizer in restricted hardware using pruning and quantization | | 0
A Note on Knowledge Distillation Loss Function for Object Classification | | 0
Multihop: Leveraging Complex Models to Learn Accurate Simple Models | | 0
KroneckerBERT: Learning Kronecker Decomposition for Pre-trained Language Models via Knowledge Distillation | | 0
Causal Explanation of Convolutional Neural Networks | Code | 0
BioNetExplorer: Architecture-Space Exploration of Bio-Signal Processing Deep Neural Networks for Wearables | | 0
GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization | | 0
Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision | | 0
Lipschitz Continuity Guided Knowledge Distillation | | 0
DKM: Differentiable K-Means Clustering Layer for Neural Network Compression | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
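The DKM rows correspond to the "DKM: Differentiable K-Means Clustering Layer for Neural Network Compression" paper in the list above; "2bit-1dim" is read here as clustering each scalar weight into 2^2 = 4 shared centroids (an interpretation of the label, not a verified description). The sketch below shows plain hard-assignment k-means weight clustering at that bit width, i.e. the non-differentiable baseline of the idea, not DKM itself; the layer size, iteration count, and function name are illustrative assumptions.

```python
# Minimal sketch: hard k-means weight clustering ("b-bit, 1-dim" codebook quantization).
# This is NOT the DKM algorithm (which relaxes the assignment to make it differentiable);
# it only illustrates what a 2-bit-per-weight, scalar-centroid configuration means.
import numpy as np

def kmeans_cluster_weights(w, bits=2, iters=20, seed=0):
    """Quantize a flat weight vector to 2**bits shared scalar centroids."""
    rng = np.random.default_rng(seed)
    k = 2 ** bits
    centroids = rng.choice(w, size=k, replace=False)   # initialize centroids from the weights
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            mask = assign == c
            if mask.any():
                centroids[c] = w[mask].mean()
    assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    return centroids[assign], assign, centroids        # quantized weights, codes, codebook

rng = np.random.default_rng(1)
w = rng.standard_normal(4096).astype(np.float32)       # hypothetical layer weights, flattened
w_q, codes, codebook = kmeans_cluster_weights(w, bits=2)
print("codebook:", codebook)                           # only 4 distinct values remain
print("relative error:", np.linalg.norm(w - w_q) / np.linalg.norm(w))
```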