
Network Pruning

Network pruning is a popular approach to compressing a heavy network into a light-weight one by removing its redundancy. A complex, over-parameterized network is first trained, then pruned according to some criterion (e.g., weight magnitude), and finally fine-tuned to recover comparable performance with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
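The train-prune-fine-tune pipeline above can be sketched in a few lines. Below is a minimal, illustrative example of one common criterion, magnitude-based pruning; the 50% sparsity target and the toy weight values are assumptions for demonstration, not taken from any paper listed here.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    In a real pipeline this runs on a trained network's weight
    tensors, and the network is fine-tuned afterwards to recover
    accuracy; here plain Python lists stand in for the tensors.
    """
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    threshold = flat[k] if k < len(flat) else float("inf")
    return [0.0 if abs(w) < threshold else w for w in weights]

# Toy trained weights (illustrative values).
weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
pruned = prune_by_magnitude(weights, sparsity=0.5)
# The three smallest-magnitude weights are zeroed:
# [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Structured variants prune whole channels or filters rather than individual weights, which is what most of the ResNet-50 entries in the benchmark tables below do.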

Papers

Showing 326–350 of 534 papers

- Network Pruning for Low-Rank Binary Indexing
- Network Pruning Optimization by Simulated Annealing Algorithm
- Network Pruning Spaces
- AutoPruning for Deep Neural Network with Dynamic Channel Masking
- Adapting the Biological SSVEP Response to Artificial Neural Networks
- Pruning Before Training May Improve Generalization, Provably
- Automatic Sparse Connectivity Learning for Neural Networks
- When to Prune? A Policy towards Early Structural Pruning
- Neural Architecture Codesign for Fast Bragg Peak Analysis
- Why Does DARTS Miss the Target, and How Do We Aim to Fix It?
- Neural Network Compression via Effective Filter Analysis and Hierarchical Pruning
- Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method
- Neural Network Optimization for Reinforcement Learning Tasks Using Sparse Computations
- GD doesn't make the cut: Three ways that non-differentiability affects neural network training
- Neural Network Pruning as Spectrum Preserving Process
- Neural Network Pruning by Cooperative Coevolution
- New Pruning Method Based on DenseNet Network for Image Classification
- Neural Network Pruning for Real-time Polyp Segmentation
- Neural Network Pruning Through Constrained Reinforcement Learning
- To prune or not to prune: A chaos-causality approach to principled pruning of dense neural networks
- Automatic Pruning via Structured Lasso with Class-wise Information
- Towards Communication-Learning Trade-off for Federated Learning at the Network Edge
- Automatic Block-wise Pruning with Auxiliary Gating Structures for Deep Convolutional Neural Networks
- NISP: Pruning Networks using Neuron Importance Score Propagation
- Automated Model Compression by Jointly Applied Pruning and Quantization
Page 14 of 22

Benchmark Results

#   Model                 Metric    Claimed  Verified  Status
1   ResNet50-2.3 GFLOPs   Accuracy  78.79    -         Unverified
2   ResNet50-1.5 GFLOPs   Accuracy  78.07    -         Unverified
3   ResNet50 2.5 GFLOPS   Accuracy  78       -         Unverified
4   RegX-1.6G             Accuracy  77.97    -         Unverified
5   ResNet50 2.0 GFLOPS   Accuracy  77.7     -         Unverified
6   ResNet50-3G FLOPs     Accuracy  77.1     -         Unverified
7   ResNet50-2G FLOPs     Accuracy  76.4     -         Unverified
8   ResNet50-1G FLOPs     Accuracy  76.38    -         Unverified
9   TAS-pruned ResNet-50  Accuracy  76.2     -         Unverified
10  ResNet50              Accuracy  75.59    -         Unverified
#  Model     Metric          Claimed  Verified  Status
1  Feather   Top-1 Accuracy  76.93    -         Unverified
2  Spartan   Top-1 Accuracy  76.17    -         Unverified
3  ST-3      Top-1 Accuracy  76.03    -         Unverified
4  AC/DC     Top-1 Accuracy  75.64    -         Unverified
5  CS        Top-1 Accuracy  75.5     -         Unverified
6  ProbMask  Top-1 Accuracy  74.68    -         Unverified
7  STR       Top-1 Accuracy  74.31    -         Unverified
8  DNW       Top-1 Accuracy  74       -         Unverified
9  GMP       Top-1 Accuracy  73.91    -         Unverified
#  Model                  Metric               Claimed  Verified  Status
1  +U-DML*                Inference Time (ms)  675.56   -         Unverified
2  Dense                  Accuracy             79       -         Unverified
3  AC/DC                  Accuracy             78.2     -         Unverified
4  Beta-Rank              Accuracy             74.01    -         Unverified
5  TAS-pruned ResNet-110  Accuracy             73.16    -         Unverified
#  Model                   Metric               Claimed  Verified  Status
1  TAS-pruned ResNet-110   Accuracy             94.33    -         Unverified
2  ShuffleNet – Quantised  Inference Time (ms)  23.15    -         Unverified
3  AlexNet – Quantised     Inference Time (ms)  5.23     -         Unverified
4  MobileNet – Quantised   Inference Time (ms)  4.74     -         Unverified
#  Model              Metric      Claimed  Verified  Status
1  FFN-ShapleyPruned  Avg #Steps  12.05    -         Unverified