SOTAVerified

Network Pruning

Network pruning is a popular approach for reducing a heavy network to a lightweight form by removing its redundancy. A complex, over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to recover comparable performance with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
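As an illustration of the train-prune-fine-tune pipeline described above, the simplest common criterion is global magnitude pruning: zero out the smallest-magnitude fraction of weights, then fine-tune with the resulting mask held fixed. The sketch below (pure Python, not any specific paper's method; the function name and toy weights are illustrative assumptions) shows the pruning step:

```python
def magnitude_prune(weights, sparsity):
    """Global magnitude pruning sketch (illustrative, not a specific paper's method).

    Zeros out the `sparsity` fraction of entries with the smallest absolute
    value and returns (pruned_weights, mask). During fine-tuning, the mask
    would be held fixed so pruned weights stay at zero.
    """
    # Collect all magnitudes to pick a single global threshold.
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)          # number of weights to remove
    threshold = flat[k - 1] if k > 0 else float("-inf")

    # Keep only weights whose magnitude exceeds the threshold.
    mask = [[abs(w) > threshold for w in row] for row in weights]
    pruned = [[w if keep else 0.0 for w, keep in zip(row, mrow)]
              for row, mrow in zip(weights, mask)]
    return pruned, mask

# Toy 2x2 weight matrix; prune the weaker half of the connections.
pruned, mask = magnitude_prune([[0.5, -0.1], [0.02, 0.9]], sparsity=0.5)
```

In a real framework the mask would be applied after every fine-tuning step (or the masked gradients zeroed) so the network stays sparse while recovering accuracy.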

Papers

Showing 101–150 of 534 papers

Title | Status | Hype
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers | Code | 1
Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs | Code | 1
1xN Pattern for Pruning Convolutional Neural Networks | Code | 1
Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation | Code | 1
Effective Sparsification of Neural Networks with Global Sparsity Constraint | Code | 1
ReplaceMe: Network Simplification via Layer Pruning and Linear Transformations | Code | 1
Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness | Code | 1
Channel Gating Neural Networks | Code | 1
Structured Sparsification with Joint Optimization of Group Convolution and Channel Shuffle | Code | 0
A flexible, extensible software framework for model compression based on the LC algorithm | Code | 0
Reproducibility Study: Comparing Rewinding and Fine-tuning in Neural Network Pruning | Code | 0
Efficient Model-Based Deep Learning via Network Pruning and Fine-Tuning | Code | 0
Max-Affine Spline Insights Into Deep Network Pruning | Code | 0
Network Compression via Central Filter | Code | 0
A Fair Loss Function for Network Pruning | Code | 0
Compact Bayesian Neural Networks via pruned MCMC sampling | Code | 0
Cogradient Descent for Dependable Learning | Code | 0
A Signal Propagation Perspective for Pruning Neural Networks at Initialization | Code | 0
Magnitude and Similarity based Variable Rate Filter Pruning for Efficient Convolution Neural Networks | Code | 0
LLM-Rank: A Graph Theoretical Approach to Pruning Large Language Models | Code | 0
Are Straight-Through gradients and Soft-Thresholding all you need for Sparse Training? | Code | 0
ABCP: Automatic Block-wise and Channel-wise Network Pruning via Joint Search | Code | 0
Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy | Code | 0
Clustering Convolutional Kernels to Compress Deep Neural Networks | Code | 0
LEAN: graph-based pruning for convolutional neural networks by extracting longest chains | Code | 0
Model compression as constrained optimization, with application to neural nets. Part V: combining compressions | Code | 0
L-CO-Net: Learned Condensation-Optimization Network for Clinical Parameter Estimation from Cardiac Cine MRI | Code | 0
CFSP: An Efficient Structured Pruning Framework for LLMs with Coarse-to-Fine Activation Information | Code | 0
Efficient Structured Pruning and Architecture Searching for Group Convolution | Code | 0
A Quantization-Friendly Separable Convolution for MobileNets | Code | 0
Causal Explanation of Convolutional Neural Networks | Code | 0
Knowledge-Enhanced Semi-Supervised Federated Learning for Aggregating Heterogeneous Lightweight Clients in IoT | Code | 0
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency | Code | 0
Few Sample Knowledge Distillation for Efficient Network Compression | Code | 0
Learning Sparse Networks Using Targeted Dropout | Code | 0
Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning | Code | 0
A pruning method based on the dissimilarity of angle among channels and filters | Code | 0
Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification | Code | 0
Can pruning improve certified robustness of neural networks? | Code | 0
Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps | Code | 0
Building Efficient ConvNets using Redundant Feature Pruning | Code | 0
Improving the Transferability of Adversarial Examples via Direction Tuning | Code | 0
Investigating the Effect of Network Pruning on Performance and Interpretability | Code | 0
Less is KEN: a Universal and Simple Non-Parametric Pruning Algorithm for Large Language Models | Code | 0
Network Pruning via Feature Shift Minimization | Code | 0
Boosting Large Language Models with Mask Fine-Tuning | Code | 0
HALO: Learning to Prune Neural Networks with Shrinkage | Code | 0
Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models | Code | 0
Adaptive Search-and-Training for Robust and Efficient Network Pruning | Code | 0
B-FPGM: Lightweight Face Detection via Bayesian-Optimized Soft FPGM Pruning | Code | 0
Page 3 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | – | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | – | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | – | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | – | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | – | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | – | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | – | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | – | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | – | Unverified
10 | ResNet50 | Accuracy | 75.59 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | – | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | – | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | – | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | – | Unverified
5 | CS | Top-1 Accuracy | 75.5 | – | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | – | Unverified
7 | STR | Top-1 Accuracy | 74.31 | – | Unverified
8 | DNW | Top-1 Accuracy | 74 | – | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | – | Unverified
2 | Dense | Accuracy | 79 | – | Unverified
3 | AC/DC | Accuracy | 78.2 | – | Unverified
4 | Beta-Rank | Accuracy | 74.01 | – | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | – | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | – | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | – | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | – | Unverified