
Network Pruning

Network pruning is a popular approach to reducing a heavy network to a lightweight form by removing its redundancy. A complex, over-parameterized network is first trained, then pruned according to some criterion (e.g., weight magnitude), and finally fine-tuned to recover performance comparable to the original model with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
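
As a concrete illustration of this train-prune-fine-tune pipeline, here is a minimal sketch in PyTorch using its built-in torch.nn.utils.prune utilities with a global L1-magnitude criterion. The toy model, the random stand-in data, and the 90% sparsity level are all assumptions chosen for illustration; they do not come from any particular paper listed below.

```python
# Sketch of the train -> prune -> fine-tune pipeline described above.
# Assumptions: PyTorch; a toy MLP and random stand-in data; the 90%
# global sparsity level is arbitrary, chosen only for illustration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torch.utils.data import DataLoader, TensorDataset

def train(model, loader, epochs, lr=1e-2):
    """Plain supervised loop; reused unchanged for the fine-tuning stage."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Random stand-in for a real dataset (assumption for the sketch).
data = TensorDataset(torch.randn(256, 784), torch.randint(0, 10, (256,)))
loader = DataLoader(data, batch_size=32, shuffle=True)

# Stand-in for the complex over-parameterized network.
model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))

# 1) Train the dense, over-parameterized network.
train(model, loader, epochs=5)

# 2) Prune: zero the 90% of weights with the smallest magnitude, ranked
#    globally across all Linear layers (an L1-magnitude criterion).
to_prune = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.9)

# 3) Fine-tune the surviving weights; the pruning masks keep the
#    removed entries pinned at zero during these updates.
train(model, loader, epochs=3, lr=1e-3)

# Fold the masks into the weight tensors so the zeros become permanent.
for module, name in to_prune:
    prune.remove(module, name)
```

Structured variants (filter or channel pruning, as in several papers below) follow the same three stages but remove whole filters or channels rather than individual weights, which makes the savings easier to realize on standard hardware.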

Papers

Showing 51–100 of 534 papers

Title | Status | Hype
1xN Pattern for Pruning Convolutional Neural Networks | Code | 1
Pruning vs Quantization: Which is Better? | Code | 1
Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness | Code | 1
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free | Code | 1
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks | Code | 1
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | Code | 1
Self-Damaging Contrastive Learning | Code | 1
Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming | Code | 1
SNIP: Single-shot Network Pruning based on Connection Sensitivity | Code | 1
Soft Threshold Weight Reparameterization for Learnable Sparsity | Code | 1
Sparse Double Descent: Where Network Pruning Aggravates Overfitting | Code | 1
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration | Code | 1
A Three-regime Model of Network Pruning | Code | 1
DHP: Differentiable Meta Pruning via HyperNetworks | Code | 1
Discrimination-aware Network Pruning for Deep Model Compression | Code | 1
Discovering Neural Wirings | Code | 1
Dynamic Channel Pruning: Feature Boosting and Suppression | Code | 1
Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs | Code | 1
A Gradient Flow Framework For Analyzing Network Pruning | Code | 1
EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning | Code | 1
DominoSearch: Find layer-wise fine-grained N:M sparse schemes from dense neural networks | Code | 1
Recent Advances on Neural Network Pruning at Initialization | Code | 1
Feather: An Elegant Solution to Effective DNN Sparsification | Code | 1
Filter-Pruning of Lightweight Face Detectors Using a Geometric Median Criterion | Code | 1
Aligned Structured Sparsity Learning for Efficient Image Super-Resolution | Code | 1
Automatic Network Pruning via Hilbert-Schmidt Independence Criterion Lasso under Information Bottleneck Principle | Code | 1
Automatic Neural Network Pruning that Efficiently Preserves the Model Accuracy | Code | 1
Fluctuation-based Adaptive Structured Pruning for Large Language Models | Code | 1
How Sparse Can We Prune A Deep Network: A Fundamental Limit Viewpoint | Code | 1
HRank: Filter Pruning using High-Rank Feature Map | Code | 1
Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning | Code | 1
Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch | Code | 1
Channel Gating Neural Networks | Code | 1
Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation | Code | 1
M-FAC: Efficient Matrix-Free Approximations of Second-Order Information | Code | 1
Lottery Jackpots Exist in Pre-trained Models | Code | 1
How Not to Give a FLOP: Combining Regularization and Pruning for Efficient Inference | Code | 1
Beyond Size: How Gradients Shape Pruning Decisions in Large Language Models | Code | 1
MicroNet for Efficient Language Modeling | Code | 1
Movement Pruning: Adaptive Sparsity by Fine-Tuning | Code | 1
Layer-adaptive sparsity for the Magnitude-based Pruning | Code | 1
Network Pruning via Resource Reallocation | Code | 1
An Information Theory-inspired Strategy for Automatic Network Pruning | Code | 1
Neuron Merging: Compensating for Pruned Neurons | Code | 1
Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise Sparsity | Code | 1
Comparing Rewinding and Fine-tuning in Neural Network Pruning | Code | 1
APP: Anytime Progressive Pruning | Code | 1
Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation | Code | 1
Advanced Dropout: A Model-free Methodology for Bayesian Dropout Optimization | Code | 1
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity | Code | 1
Page 2 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | | Unverified
3 | ResNet50 2.5 GFLOPs | Accuracy | 78 | | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | | Unverified
5 | ResNet50 2.0 GFLOPs | Accuracy | 77.7 | | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | | Unverified
10 | ResNet50 | Accuracy | 75.59 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | | Unverified
5 | CS | Top-1 Accuracy | 75.5 | | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | | Unverified
7 | STR | Top-1 Accuracy | 74.31 | | Unverified
8 | DNW | Top-1 Accuracy | 74 | | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | | Unverified
2 | Dense | Accuracy | 79 | | Unverified
3 | AC/DC | Accuracy | 78.2 | | Unverified
4 | Beta-Rank | Accuracy | 74.01 | | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | | Unverified