SOTAVerified

Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
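
The three techniques named above are easy to illustrate on a single weight matrix. The NumPy sketch below is a minimal, framework-free illustration, not any specific paper's method; the matrix size, sparsity level, rank, and bit width are arbitrary assumptions chosen for the example. Magnitude pruning zeroes small weights, truncated SVD yields a low-rank factorization, and uniform quantization maps weights to a small set of integer levels.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in for a dense layer's weights

# 1. Parameter pruning: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) < threshold, 0.0, W)

# 2. Low-rank factorization: keep the top-k singular components, so the
#    256x256 matrix is stored as two 256xk factors instead.
k = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * S[:k]   # 256 x k
B = Vt[:k, :]          # k x 256
W_lowrank = A @ B

# 3. Weight quantization: symmetric uniform 8-bit quantization (int8 codes plus one scale).
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_q.astype(np.float32) * scale

for name, approx in [("pruned", W_pruned), ("low-rank", W_lowrank), ("quantized", W_dequant)]:
    err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    print(f"{name:9s} relative reconstruction error: {err:.3f}")
```

Each variant trades reconstruction error for storage: the pruned matrix needs only the nonzero entries, the low-rank version needs 2 x 256 x 32 values instead of 256 x 256, and the quantized version needs one byte per weight plus a single scale.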

Papers

Showing 1301–1350 of 1356 papers

Title | Status | Hype
Multi-Task Zipping via Layer-wise Neuron Sharing | | 0
DEEPEYE: A Compact and Accurate Video Comprehension at Terminal Devices Compressed with Quantization and Tensorization | | 0
Precise Box Score: Extract More Information from Datasets to Improve the Performance of Face Detection | | 0
Developing Far-Field Speaker System Via Teacher-Student Learning | | 0
Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory | Code | 0
Efficient Recurrent Neural Networks using Structured Matrices in FPGAs | | 0
Interpreting Deep Classifier by Visual Distillation of Dark Knowledge | | 0
Model compression via distillation and quantization | Code | 0
Paraphrasing Complex Network: Network Compression via Factor Transfer | Code | 0
Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography | | 0
Don't encrypt the data; just approximate the model \ Towards Secure Transaction and Fair Pricing of Training Data | | 0
DNN Model Compression Under Accuracy Constraints | | 0
Adaptive Quantization of Neural Networks | | 0
Learning Deep and Compact Models for Gesture Recognition | Code | 0
StrassenNets: Deep Learning with a Multiplication Budget | Code | 0
Neural Regularized Domain Adaptation for Chinese Word Segmentation | | 0
Learning Efficient Object Detection Models with Knowledge Distillation | | 0
MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Face Images | Code | 0
Improved Bayesian Compression | | 0
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy | | 0
Weightless: Lossy Weight Encoding For Deep Neural Network Compression | Code | 0
A Survey of Model Compression and Acceleration for Deep Neural Networks | | 0
Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks | | 0
N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning | | 0
Learning Intrinsic Sparse Structures within Long Short-Term Memory | Code | 0
A Deep Cascade Network for Unaligned Face Attribute Classification | | 0
Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification | | 0
DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices | | 0
Model compression as constrained optimization, with application to neural nets. Part II: quantization | | 0
Model compression as constrained optimization, with application to neural nets. Part I: general framework | | 0
DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer | | 0
Tensor Contraction Layers for Parsimonious Deep Nets | | 0
Cross-lingual Distillation for Text Classification | Code | 0
Acoustic Model Compression with MAP adaptation | | 0
Exploiting random projections and sparsity with random forests and gradient boosting methods -- Application to multi-label and multi-output learning, random forest model compression and leveraging input sparsity | | 0
A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification and Domain Adaptation | | 0
Exploiting Domain Knowledge via Grouped Weight Sharing with Application to Text Categorization | | 0
Compression of Deep Neural Networks for Image Instance Retrieval | | 0
QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures | | 0
Two-Bit Networks for Deep Learning on Resource-Constrained Embedded Devices | | 0
Parameter Compression of Recurrent Neural Networks and Degradation of Short-term Memory | | 0
The Shallow End: Empowering Shallower Deep-Convolutional Networks through Auxiliary Outputs | Code | 0
Deep Model Compression: Distilling Knowledge from Noisy Teachers | | 0
Ensemble-Compression: A New Method for Parallel Training of Deep Neural Networks | | 0
Adapting Models to Signal Degradation using Distillation | | 0
On the Compression of Recurrent Neural Networks with an Application to LVCSR acoustic modeling for Embedded Speech Recognition | | 0
Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Code | 0
Blending LSTMs into CNNs | | 0
Distilling Model Knowledge | Code | 0
DeepFont: Identify Your Font from An Image | Code | 0
Page 27 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
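
Both benchmark rows compress MobileBERT with DKM, which clusters weights with a differentiable k-means layer: a 2-bit, 1-dimensional codebook means every scalar weight is replaced by one of 2^2 = 4 learned centroids. The sketch below shows only the non-differentiable core of that idea, plain hard k-means over scalar weights; the cluster count, iteration budget, and random weights are illustrative assumptions, and DKM itself differs by making the cluster assignment soft so centroids can be trained end to end.

```python
import numpy as np

def kmeans_quantize(w, bits=2, iters=25, seed=0):
    """Cluster scalar weights into 2**bits centroids with hard k-means.

    The layer is then stored as integer codes (`bits` per weight)
    plus a tiny float codebook of 2**bits entries.
    """
    rng = np.random.default_rng(seed)
    flat = w.ravel()
    k = 2 ** bits
    centroids = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recompute centroids.
        codes = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(codes == j):
                centroids[j] = flat[codes == j].mean()
    return codes.reshape(w.shape), centroids

rng = np.random.default_rng(1)
W = rng.normal(size=(128, 128)).astype(np.float32)  # stand-in for a layer's weights
codes, codebook = kmeans_quantize(W, bits=2)        # "2bit-1dim" clustering
W_hat = codebook[codes]                             # reconstructed weights
print("relative reconstruction error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

The gap between the two rows (82.13 vs. 63.17 claimed accuracy) reflects the usual trade-off: halving the codebook from 4 centroids (2-bit) to 2 centroids (1-bit) shrinks storage further but forces a much coarser approximation of the weights.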