SOTAVerified

Network Pruning

Network pruning is a popular approach to reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve comparable performance with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
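A minimal sketch of this train, prune, fine-tune pipeline, assuming PyTorch. The pruning criterion shown is plain global L1-magnitude pruning (not any specific method from the papers below), and `train_one_epoch` is a hypothetical training helper supplied by the caller:

```python
# Sketch of the train -> prune -> fine-tune pipeline using torch.nn.utils.prune.
# `train_one_epoch` is a hypothetical placeholder for a training loop.
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_and_finetune(model: nn.Module, train_one_epoch, epochs: int = 5):
    # Step 1 (assumed done): `model` is the fully trained,
    # over-parameterized network.

    # Step 2: prune the 50% of conv/linear weights with the smallest
    # L1 magnitude, ranked globally across all listed layers.
    to_prune = [
        (m, "weight")
        for m in model.modules()
        if isinstance(m, (nn.Conv2d, nn.Linear))
    ]
    prune.global_unstructured(
        to_prune, pruning_method=prune.L1Unstructured, amount=0.5
    )

    # Step 3: fine-tune briefly so the surviving weights compensate
    # for the removed ones.
    for _ in range(epochs):
        train_one_epoch(model)

    # Bake the binary masks into the weight tensors permanently.
    for module, name in to_prune:
        prune.remove(module, name)
    return model
```

Global ranking (rather than a fixed per-layer ratio) lets heavily redundant layers absorb more of the sparsity budget; several of the papers listed below study exactly this layer-wise allocation question.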

Papers

Showing 51–100 of 534 papers

Title | Status | Hype
Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations | Code | 1
Learning Compact Representations of Neural Networks using DiscriminAtive Masking (DAM) | Code | 1
Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions | Code | 1
An Information Theory-inspired Strategy for Automatic Network Pruning | Code | 1
Group Fisher Pruning for Practical Network Compression | Code | 1
Manipulating Identical Filter Redundancy for Efficient Pruning on Deep and Complicated CNN | Code | 1
M-FAC: Efficient Matrix-Free Approximations of Second-Order Information | Code | 1
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks | Code | 1
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | Code | 1
Self-Damaging Contrastive Learning | Code | 1
1xN Pattern for Pruning Convolutional Neural Networks | Code | 1
Network Pruning That Matters: A Case Study on Retraining Variants | Code | 1
Effective Sparsification of Neural Networks with Global Sparsity Constraint | Code | 1
Lottery Jackpots Exist in Pre-trained Models | Code | 1
EfficientTDNN: Efficient Architecture Search for Speaker Recognition | Code | 1
Recent Advances on Neural Network Pruning at Initialization | Code | 1
Manifold Regularized Dynamic Network Pruning | Code | 1
Network Pruning via Resource Reallocation | Code | 1
Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning | Code | 1
Network Pruning using Adaptive Exemplar Filters | Code | 1
Neural Pruning via Growing Regularization | Code | 1
Neuron Merging: Compensating for Pruned Neurons | Code | 1
SCOP: Scientific Control for Reliable Neural Network Pruning | Code | 1
Layer-adaptive sparsity for the Magnitude-based Pruning | Code | 1
Advanced Dropout: A Model-free Methodology for Bayesian Dropout Optimization | Code | 1
A Gradient Flow Framework For Analyzing Network Pruning | Code | 1
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | Code | 1
Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise Sparsity | Code | 1
EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning | Code | 1
Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation | Code | 1
Weight Pruning via Adaptive Sparsity Loss | Code | 1
Shapley Value as Principled Metric for Structured Network Pruning | Code | 1
MicroNet for Efficient Language Modeling | Code | 1
Movement Pruning: Adaptive Sparsity by Fine-Tuning | Code | 1
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers | Code | 1
Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio | Code | 1
DHP: Differentiable Meta Pruning via HyperNetworks | Code | 1
How Not to Give a FLOP: Combining Regularization and Pruning for Efficient Inference | Code | 1
Similarity of Neural Networks with Gradients | Code | 1
What is the State of Neural Network Pruning? | Code | 1
Comparing Rewinding and Fine-tuning in Neural Network Pruning | Code | 1
Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection | Code | 1
HYDRA: Pruning Adversarially Robust Neural Networks | Code | 1
HRank: Filter Pruning using High-Rank Feature Map | Code | 1
Knapsack Pruning with Inner Distillation | Code | 1
Picking Winning Tickets Before Training by Preserving Gradient Flow | Code | 1
Soft Threshold Weight Reparameterization for Learnable Sparsity | Code | 1
Filter Sketch for Network Pruning | Code | 1
Quantisation and Pruning for Neural Network Compression and Regularisation | Code | 1
Sparse Weight Activation Training | Code | 1
Page 2 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | – | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | – | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | – | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | – | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | – | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | – | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | – | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | – | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | – | Unverified
10 | ResNet50 | Accuracy | 75.59 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | – | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | – | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | – | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | – | Unverified
5 | CS | Top-1 Accuracy | 75.5 | – | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | – | Unverified
7 | STR | Top-1 Accuracy | 74.31 | – | Unverified
8 | DNW | Top-1 Accuracy | 74 | – | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | – | Unverified
2 | Dense | Accuracy | 79 | – | Unverified
3 | AC/DC | Accuracy | 78.2 | – | Unverified
4 | Beta-Rank | Accuracy | 74.01 | – | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | – | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | – | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | – | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | – | Unverified