SOTAVerified

Network Pruning

Network pruning is a popular approach to compressing a heavy network into a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve comparable performance with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
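The train-prune-fine-tune pipeline described above can be sketched with a minimal magnitude-based pruning step, one common criterion. This is an illustrative sketch, not the method of any particular paper; the function name and threshold logic are assumptions:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of `weights`.

    `sparsity` is the fraction of entries to remove (0.0-1.0).
    Returns the pruned copy and the binary mask; during fine-tuning,
    the mask is reapplied after each update so pruned entries stay zero.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        mask = np.ones(weights.shape, dtype=bool)
    else:
        # k-th smallest absolute value becomes the pruning threshold
        threshold = np.partition(flat, k - 1)[k - 1]
        mask = np.abs(weights) > threshold
    return weights * mask, mask

# Prune a toy trained weight matrix to 50% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, 0.5)
```

In a real setting, `w` would come from the trained over-parameterized network, and the fine-tuning loop would multiply gradients (or updated weights) by `mask` to keep the pruned connections removed.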

Papers

Showing 351–375 of 534 papers

| Title | Status | Hype |
| --- | --- | --- |
| The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning | | 0 |
| Pruning Before Training May Improve Generalization, Provably | | 0 |
| Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method | | 0 |
| GD doesn't make the cut: Three ways that non-differentiability affects neural network training | | 0 |
| New Pruning Method Based on DenseNet Network for Image Classification | | 0 |
| To prune or not to prune : A chaos-causality approach to principled pruning of dense neural networks | | 0 |
| Towards Communication-Learning Trade-off for Federated Learning at the Network Edge | | 0 |
| Towards Compact and Robust Deep Neural Networks | | 0 |
| Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning | | 0 |
| Towards Fairness-aware Adversarial Network Pruning | | 0 |
| Towards Higher Ranks via Adversarial Weight Pruning | | 0 |
| Towards Lightweight Graph Neural Network Search with Curriculum Graph Sparsification | | 0 |
| Towards Lightweight Neural Animation : Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models | | 0 |
| Towards thinner convolutional neural networks through Gradually Global Pruning | | 0 |
| Network Pruning via Annealing and Direct Sparsity Control | | 0 |
| TraNNsformer: Neural network transformation for memristive crossbar based neuromorphic system design | | 0 |
| Troubleshooting Blind Image Quality Models in the Wild | | 0 |
| TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks | | 0 |
| Ultrafast Photorealistic Style Transfer via Neural Architecture Search | | 0 |
| Understanding Diversity Based Neural Network Pruning in Teacher Student Setup | | 0 |
| "Understanding Robustness Lottery": A Geometric Visual Comparative Analysis of Neural Network Pruning Approaches | | 0 |
| Unveiling Invariances via Neural Network Pruning | | 0 |
| Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory | | 0 |
| Variational Convolutional Neural Network Pruning | | 0 |
| Verification of Neural Networks: Enhancing Scalability through Pruning | | 0 |
Page 15 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | | Unverified |
| 2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | | Unverified |
| 3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | | Unverified |
| 4 | RegX-1.6G | Accuracy | 77.97 | | Unverified |
| 5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | | Unverified |
| 6 | ResNet50-3G FLOPs | Accuracy | 77.1 | | Unverified |
| 7 | ResNet50-2G FLOPs | Accuracy | 76.4 | | Unverified |
| 8 | ResNet50-1G FLOPs | Accuracy | 76.38 | | Unverified |
| 9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | | Unverified |
| 10 | ResNet50 | Accuracy | 75.59 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Feather | Top-1 Accuracy | 76.93 | | Unverified |
| 2 | Spartan | Top-1 Accuracy | 76.17 | | Unverified |
| 3 | ST-3 | Top-1 Accuracy | 76.03 | | Unverified |
| 4 | AC/DC | Top-1 Accuracy | 75.64 | | Unverified |
| 5 | CS | Top-1 Accuracy | 75.5 | | Unverified |
| 6 | ProbMask | Top-1 Accuracy | 74.68 | | Unverified |
| 7 | STR | Top-1 Accuracy | 74.31 | | Unverified |
| 8 | DNW | Top-1 Accuracy | 74 | | Unverified |
| 9 | GMP | Top-1 Accuracy | 73.91 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | +U-DML* | Inference Time (ms) | 675.56 | | Unverified |
| 2 | Dense | Accuracy | 79 | | Unverified |
| 3 | AC/DC | Accuracy | 78.2 | | Unverified |
| 4 | Beta-Rank | Accuracy | 74.01 | | Unverified |
| 5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | | Unverified |
| 2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | | Unverified |
| 3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | | Unverified |
| 4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | | Unverified |