SOTAVerified

Network Pruning

Network Pruning is a popular approach for reducing a heavy network to a lightweight form by removing redundancy in the heavy network. In this approach, a complex over-parameterized network is first trained, then pruned based on some criteria, and finally fine-tuned to achieve comparable performance with reduced parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
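As a concrete illustration of the train → prune → fine-tune pipeline described above, the sketch below uses PyTorch's torch.nn.utils.prune module with a simple L1-magnitude criterion. The model, the 50% sparsity level, and the training loop are illustrative placeholders, not the method of any particular paper listed on this page.

```python
# Minimal sketch of train -> prune -> fine-tune, assuming a toy
# fully-connected classifier; all hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(          # stand-in for an over-parameterized network
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

# Step 1 (assumed already done): train the dense network to convergence.

# Step 2: prune. Here a magnitude criterion removes the 50% of weights
# with the smallest absolute value in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Step 3: fine-tune the pruned network. The masks registered by
# l1_unstructured keep the removed weights at zero during these updates.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))  # dummy batch
for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# Finally, bake the masks into the weight tensors so the sparsity
# is permanent and the pruning reparametrization is removed.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```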

Papers

Showing 126–150 of 534 papers

Title | Status | Hype
Comprehensive Study on Performance Evaluation and Optimization of Model Compression: Bridging Traditional Deep Learning and Large Language Models | - | 0
Cogradient Descent for Bilinear Optimization | - | 0
Coarse and fine-grained automatic cropping deep convolutional neural network | - | 0
Pruning coupled with learning, ensembles of minimal neural networks, and future of XAI | - | 0
CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization | - | 0
NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration | - | 0
Efficient Multi-Object Tracking on Edge Devices via Reconstruction-Based Channel Pruning | - | 0
Enabling Image Recognition on Constrained Devices Using Neural Network Pruning and a CycleGAN | - | 0
Ensemble Mask Networks | - | 0
Channel-wise pruning of neural networks with tapering resource constraint | - | 0
Channel Planting for Deep Neural Networks using Knowledge Distillation | - | 0
A relativistic extension of Hopfield neural networks via the mechanical analogy | - | 0
Architecture-aware Network Pruning for Vision Quality Applications | - | 0
Efficient Ensembles of Graph Neural Networks | - | 0
Certified Invertibility in Neural Networks via Mixed-Integer Programming | - | 0
AP: Selective Activation for De-sparsifying Pruned Neural Networks | - | 0
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization | - | 0
Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning | - | 0
CWP: Instance complexity weighted channel-wise soft masks for network pruning | - | 0
Effective Subset Selection Through The Lens of Neural Network Pruning | - | 0
CAP: Context-Aware Pruning for Semantic-Segmentation | - | 0
CAP-Context-Aware-Pruning-for-Semantic-Segmentation | - | 0
A Probabilistic Approach to Neural Network Pruning | - | 0
Can We Find Strong Lottery Tickets in Generative Models? | - | 0
ADMP: An Adversarial Double Masks Based Pruning Framework For Unsupervised Cross-Domain Compression | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | - | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | - | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | - | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | - | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | - | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | - | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | - | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | - | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | - | Unverified
10 | ResNet50 | Accuracy | 75.59 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | - | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | - | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | - | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | - | Unverified
5 | CS | Top-1 Accuracy | 75.5 | - | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | - | Unverified
7 | STR | Top-1 Accuracy | 74.31 | - | Unverified
8 | DNW | Top-1 Accuracy | 74 | - | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | - | Unverified
2 | Dense | Accuracy | 79 | - | Unverified
3 | AC/DC | Accuracy | 78.2 | - | Unverified
4 | Beta-Rank | Accuracy | 74.01 | - | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | - | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | - | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | - | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | - | Unverified