
Network Pruning

Network pruning is a popular approach to reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion (e.g., weight magnitude), and finally fine-tuned to recover comparable performance with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
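For concreteness, here is a minimal sketch of the train, prune, and fine-tune pipeline described above, using PyTorch's torch.nn.utils.prune with global L1 magnitude pruning as the criterion. The toy model, the 80% sparsity level, and the elided training loops are illustrative assumptions, not taken from any particular paper listed below.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy over-parameterized network (stand-in for a real trained model).
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Step 1: train the full network (training loop elided for brevity).

# Step 2: prune. The criterion here is global L1 weight magnitude:
# the 80% of weights with the smallest absolute value, measured across
# all linear layers jointly, are masked to zero.
parameters_to_prune = [
    (module, "weight")
    for module in model.modules()
    if isinstance(module, nn.Linear)
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.8,  # assumed sparsity level, chosen for illustration
)

# Step 3: fine-tune. The pruning masks zero out the removed weights
# during forward passes, so ordinary training on the masked model
# recovers accuracy (fine-tuning loop elided). Afterwards, bake the
# masks into the weight tensors so the model can be saved and deployed.
for module, name in parameters_to_prune:
    prune.remove(module, name)

# Sanity check: report the overall fraction of zeroed weights.
zeros = sum(int((m.weight == 0).sum()) for m, _ in parameters_to_prune)
total = sum(m.weight.nelement() for m, _ in parameters_to_prune)
print(f"global sparsity: {zeros / total:.1%}")
```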

Papers

Showing 176–200 of 534 papers

Title | Status | Hype
AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters | Code | 0
L-CO-Net: Learned Condensation-Optimization Network for Clinical Parameter Estimation from Cardiac Cine MRI | Code | 0
DiffPrune: Neural Network Pruning with Deterministic Approximate Binary Gates and L_0 Regularization | Code | 0
Accelerating and Compressing Deep Neural Networks for Massive MIMO CSI Feedback | Code | 0
EDAC: Efficient Deployment of Audio Classification Models For COVID-19 Detection | Code | 0
Adaptive Search-and-Training for Robust and Efficient Network Pruning | Code | 0
LEAN: graph-based pruning for convolutional neural networks by extracting longest chains | Code | 0
Neural Network Panning: Screening the Optimal Sparse Network Before Training | Code | 0
Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning | Code | 0
Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification | Code | 0
Investigating the Effect of Network Pruning on Performance and Interpretability | Code | 0
Device-Wise Federated Network Pruning | Code | 0
Dep-L_0: Improving L_0-based Network Sparsification via Dependency Modeling | Code | 0
Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps | Code | 0
Boosting Large Language Models with Mask Fine-Tuning | Code | 0
DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression | Code | 0
Importance Estimation for Neural Network Pruning | Code | 0
“Learning-Compression” Algorithms for Neural Net Pruning | Code | 0
Improving Generalization in Meta-Learning via Meta-Gradient Augmentation | Code | 0
Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators | Code | 0
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | Code | 0
HALO: Learning to Prune Neural Networks with Shrinkage | Code | 0
Class-dependent Compression of Deep Neural Networks | Code | 0
Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models | Code | 0
Improving the Transferability of Adversarial Examples via Direction Tuning | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | – | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | – | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | – | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | – | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | – | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | – | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | – | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | – | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | – | Unverified
10 | ResNet50 | Accuracy | 75.59 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | – | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | – | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | – | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | – | Unverified
5 | CS | Top-1 Accuracy | 75.5 | – | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | – | Unverified
7 | STR | Top-1 Accuracy | 74.31 | – | Unverified
8 | DNW | Top-1 Accuracy | 74 | – | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | – | Unverified
2 | Dense | Accuracy | 79 | – | Unverified
3 | AC/DC | Accuracy | 78.2 | – | Unverified
4 | Beta-Rank | Accuracy | 74.01 | – | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | – | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | – | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | – | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | – | Unverified