
Network Pruning

Network pruning is a popular approach to reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to recover performance comparable to the original model with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
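Since the definition above describes a concrete three-step pipeline (train, prune by a criterion, fine-tune), a minimal sketch may help. This assumes a PyTorch classifier and data loader; the `prune_and_finetune` helper, the 50% sparsity level, and the epoch/learning-rate values are illustrative choices, not taken from any paper listed below. Global L1-magnitude pruning stands in for the unspecified criterion.

```python
# Minimal sketch of the train -> prune -> fine-tune pipeline using global
# magnitude pruning in PyTorch. Model, loader, and hyperparameters are
# illustrative placeholders, not from any specific paper on this page.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def train(model, loader, epochs, lr):
    """Standard supervised training loop (dense training or fine-tuning)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def prune_and_finetune(model, loader, sparsity=0.5):
    # 1) Train the complex, over-parameterized network densely.
    train(model, loader, epochs=10, lr=0.1)

    # 2) Prune: zero out the globally smallest-magnitude weights across
    #    all conv/linear layers (one simple pruning criterion among many).
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=sparsity)

    # 3) Fine-tune the surviving weights at a lower learning rate; the
    #    pruning masks keep the removed weights at zero during forward passes.
    train(model, loader, epochs=5, lr=0.01)

    # Fold the masks into the weight tensors to make the pruning permanent.
    for m, name in params:
        prune.remove(m, name)
    return model
```

Many of the papers below replace step 2 with a different importance criterion (saliency, explanation scores, learned thresholds) or move pruning into or before training, but the overall pipeline stays the same.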

Papers

Showing 401–450 of 534 papers

Title | Status | Hype
Continual Learning for Task-oriented Dialogue System with Iterative Network Pruning, Expanding and Masking | Code | 0
A Fair Loss Function for Network Pruning | Code | 0
PP-StructureV2: A Stronger Document Analysis System | Code | 0
Importance Estimation for Neural Network Pruning | Code | 0
DASS: Differentiable Architecture Search for Sparse neural networks | Code | 0
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency | Code | 0
Improving Generalization in Meta-Learning via Meta-Gradient Augmentation | Code | 0
Improving the Transferability of Adversarial Examples via Direction Tuning | Code | 0
Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps | Code | 0
HALO: Learning to Prune Neural Networks with Shrinkage | Code | 0
Investigating the Effect of Network Pruning on Performance and Interpretability | Code | 0
Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators | Code | 0
Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning | Code | 0
The Other Side of Compression: Measuring Bias in Pruned Transformers | Code | 0
Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification | Code | 0
Global Magnitude Pruning With Minimum Threshold Is All We Need | Code | 0
A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers | Code | 0
GASL: Guided Attention for Sparsity Learning in Deep Neural Networks | Code | 0
The Search for Sparse, Robust Neural Networks | Code | 0
Few Sample Knowledge Distillation for Efficient Network Compression | Code | 0
Knowledge-Enhanced Semi-Supervised Federated Learning for Aggregating Heterogeneous Lightweight Clients in IoT | Code | 0
Progressive Stochastic Binarization of Deep Networks | Code | 0
Efficient Model-Based Deep Learning via Network Pruning and Fine-Tuning | Code | 0
Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler | Code | 0
L-CO-Net: Learned Condensation-Optimization Network for Clinical Parameter Estimation from Cardiac Cine MRI | Code | 0
LEAN: graph-based pruning for convolutional neural networks by extracting longest chains | Code | 0
A Signal Propagation Perspective for Pruning Neural Networks at Initialization | Code | 0
SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity | Code | 0
Are Straight-Through gradients and Soft-Thresholding all you need for Sparse Training? | Code | 0
Filter Pruning For CNN With Enhanced Linear Representation Redundancy | Code | 0
Connectivity Matters: Neural Network Pruning Through the Lens of Effective Sparsity | Code | 0
Pruning-aware Sparse Regularization for Network Pruning | Code | 0
Efficient Structured Pruning and Architecture Searching for Group Convolution | Code | 0
Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning | Code | 0
Learning Sparse Networks Using Targeted Dropout | Code | 0
Adaptive Search-and-Training for Robust and Efficient Network Pruning | Code | 0
Less is KEN: a Universal and Simple Non-Parametric Pruning Algorithm for Large Language Models | Code | 0
A Quantization-Friendly Separable Convolution for MobileNets | Code | 0
Feature Selection for Multivariate Time Series via Network Pruning | Code | 0
Compression-aware Training of Neural Networks using Frank-Wolfe | Code | 0
Pruning deep neural networks generates a sparse, bio-inspired nonlinear controller for insect flight | Code | 0
LLM-Rank: A Graph Theoretical Approach to Pruning Large Language Models | Code | 0
Reproducibility Study: Comparing Rewinding and Fine-tuning in Neural Network Pruning | Code | 0
Pruning for Feature-Preserving Circuits in CNNs | Code | 0
Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy | Code | 0
Self-supervised Feature-Gate Coupling for Dynamic Network Pruning | Code | 0
FastDepth: Fast Monocular Depth Estimation on Embedded Systems | Code | 0
Pruning from Scratch | Code | 0
Magnitude and Similarity based Variable Rate Filter Pruning for Efficient Convolution Neural Networks | Code | 0
Fast Convex Pruning of Deep Neural Networks | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | – | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | – | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | – | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | – | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | – | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | – | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | – | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | – | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | – | Unverified
10 | ResNet50 | Accuracy | 75.59 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | – | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | – | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | – | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | – | Unverified
5 | CS | Top-1 Accuracy | 75.5 | – | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | – | Unverified
7 | STR | Top-1 Accuracy | 74.31 | – | Unverified
8 | DNW | Top-1 Accuracy | 74 | – | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | – | Unverified
2 | Dense | Accuracy | 79 | – | Unverified
3 | AC/DC | Accuracy | 78.2 | – | Unverified
4 | Beta-Rank | Accuracy | 74.01 | – | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | – | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | – | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | – | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | – | Unverified