
Network Pruning

Network Pruning is a popular approach for reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve comparable performance with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
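
To make the train → prune → fine-tune pipeline described above concrete, here is a minimal sketch using PyTorch's built-in torch.nn.utils.prune utilities with unstructured L1 magnitude pruning. It is an illustration under assumptions, not the method of any specific paper listed below: the model, data loader, 50% sparsity target, and optimizer settings are placeholders.

```python
# Minimal sketch: prune a trained model by weight magnitude, then fine-tune.
# The sparsity level, epochs, and learning rate are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_and_finetune(model, train_loader, epochs=5, amount=0.5, lr=1e-3):
    # 1) Prune: zero out the smallest-magnitude weights in every
    #    conv/linear layer (unstructured L1 magnitude pruning).
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)

    # 2) Fine-tune: retrain the surviving weights; the pruning masks
    #    keep the removed connections at zero during updates.
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()

    # 3) Make the pruning permanent by folding the masks into the weights.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.remove(module, "weight")
    return model
```

In practice the pruning and fine-tuning steps are often iterated (prune a little, retrain, repeat) rather than applied once, but the one-shot version above captures the basic workflow.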

Papers

Showing 326–350 of 534 papers

Title | Status | Hype
RGP: Neural Network Pruning through Its Regular Graph Structure | - | 0
Robustness-Reinforced Knowledge Distillation with Correlation Distance and Network Pruning | - | 0
Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks | - | 0
Scalable iterative pruning of large language and vision models using block coordinate descent | - | 0
SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks | - | 0
Selective Brain Damage: Measuring the Disparate Impact of Model Pruning | - | 0
Self-Adaptive Network Pruning | - | 0
SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization | - | 0
Signal Collapse in One-Shot Pruning: When Sparse Models Fail to Distinguish Neural Representations | - | 0
Single-shot Channel Pruning Based on Alternating Direction Method of Multipliers | - | 0
Small Contributions, Small Networks: Efficient Neural Network Pruning Based on Relative Importance | - | 0
SMOF: Squeezing More Out of Filters Yields Hardware-Friendly CNN Pruning | - | 0
Softer Pruning, Incremental Regularization | - | 0
SparseRT: Accelerating Unstructured Sparsity on GPUs for Deep Learning Inference | - | 0
SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning | - | 0
Streamlining Tensor and Network Pruning in PyTorch | - | 0
Structural Alignment for Network Pruning through Partial Regularization | - | 0
Structurally Prune Anything: Any Architecture, Any Framework, Any Time | - | 0
Structured Deep Neural Network Pruning via Matrix Pivoting | - | 0
Structured Network Pruning by Measuring Filter-wise Interactions | - | 0
Structured Pattern Pruning Using Regularization | - | 0
Structured Pruning Meets Orthogonality | - | 0
Structured Pruning of Recurrent Neural Networks through Neuron Selection | - | 0
Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural Network Pruning | - | 0
The Generalization-Stability Tradeoff In Neural Network Pruning | - | 0

Page 14 of 22

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | - | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | - | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | - | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | - | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | - | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | - | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | - | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | - | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | - | Unverified
10 | ResNet50 | Accuracy | 75.59 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | - | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | - | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | - | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | - | Unverified
5 | CS | Top-1 Accuracy | 75.5 | - | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | - | Unverified
7 | STR | Top-1 Accuracy | 74.31 | - | Unverified
8 | DNW | Top-1 Accuracy | 74 | - | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | - | Unverified
2 | Dense | Accuracy | 79 | - | Unverified
3 | AC/DC | Accuracy | 78.2 | - | Unverified
4 | Beta-Rank | Accuracy | 74.01 | - | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | - | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | - | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | - | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | - | Unverified