
Network Pruning

Network Pruning is a popular approach for reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve comparable performance with far fewer parameters (a sketch of this pipeline appears below).

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
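To make the train-prune-fine-tune pipeline concrete, here is a minimal sketch using global magnitude (L1) pruning via PyTorch's built-in torch.nn.utils.prune module. The toy model, the 70% sparsity level, and the commented-out train() calls are illustrative assumptions, not the method of any specific paper listed below.

```python
# Minimal sketch of the train -> prune -> fine-tune pipeline, using global
# magnitude (L1) pruning from PyTorch's torch.nn.utils.prune utilities.
# The toy model and 70% sparsity are illustrative assumptions only.
import torch.nn as nn
import torch.nn.utils.prune as prune


def magnitude_prune(model: nn.Module, sparsity: float) -> nn.Module:
    """Zero out the smallest-magnitude weights across all conv/linear layers."""
    params_to_prune = [
        (m, "weight")
        for m in model.modules()
        if isinstance(m, (nn.Conv2d, nn.Linear))
    ]
    # Rank every weight of the listed layers together, mask the bottom fraction.
    prune.global_unstructured(
        params_to_prune,
        pruning_method=prune.L1Unstructured,
        amount=sparsity,
    )
    # Bake the masks into the weight tensors so the zeros become permanent.
    for module, name in params_to_prune:
        prune.remove(module, name)
    return model


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    # 1. model = train(model, loader, epochs=90)           # train the dense net (hypothetical helper)
    magnitude_prune(model, sparsity=0.7)                   # 2. prune by criterion
    # 3. model = train(model, loader, epochs=10, lr=1e-3)  # fine-tune the sparse net
    layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
    zeros = sum(int((m.weight == 0).sum()) for m in layers)
    total = sum(m.weight.numel() for m in layers)
    print(f"global sparsity: {zeros / total:.2f}")  # ~0.70
```

Ranking weights globally rather than per layer lets more redundant layers shed a larger share of their weights; per-layer pruning with the same L1 criterion is the simpler alternative.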

Papers

Showing 151–200 of 534 papers

| Title | Status | Hype |
|---|---|---|
| Less is More: The Influence of Pruning on the Explainability of CNNs | — | 0 |
| ST-MFNet Mini: Knowledge Distillation-Driven Frame Interpolation | Code | 0 |
| WHC: Weighted Hybrid Criterion for Filter Pruning on Convolutional Neural Networks | Code | 0 |
| Pruning Deep Neural Networks from a Sparsity Perspective | Code | 1 |
| Adaptive Search-and-Training for Robust and Efficient Network Pruning | Code | 0 |
| UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers | Code | 1 |
| DepGraph: Towards Any Structural Pruning | Code | 4 |
| Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming | Code | 1 |
| Certified Invertibility in Neural Networks via Mixed-Integer Programming | — | 0 |
| Accelerating and Compressing Deep Neural Networks for Massive MIMO CSI Feedback | Code | 0 |
| Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions | — | 0 |
| Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning | Code | 1 |
| Pruning Compact ConvNets for Efficient Inference | — | 0 |
| Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding | Code | 0 |
| Towards Fairness-aware Adversarial Network Pruning | — | 0 |
| Structural Alignment for Network Pruning through Partial Regularization | — | 0 |
| Automatic Network Pruning via Hilbert-Schmidt Independence Criterion Lasso under Information Bottleneck Principle | Code | 1 |
| Pruning Before Training May Improve Generalization, Provably | — | 0 |
| Magnitude and Similarity based Variable Rate Filter Pruning for Efficient Convolution Neural Networks | Code | 0 |
| Can We Find Strong Lottery Tickets in Generative Models? | — | 0 |
| AP: Selective Activation for De-sparsifying Pruned Neural Networks | — | 0 |
| Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks | — | 0 |
| Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning | — | 0 |
| Attend Who is Weak: Pruning-assisted Medical Image Localization under Sophisticated and Implicit Imbalances | — | 0 |
| Distributed Pruning Towards Tiny Neural Networks in Federated Learning | — | 0 |
| Are Straight-Through gradients and Soft-Thresholding all you need for Sparse Training? | Code | 0 |
| On Designing Light-Weight Object Trackers through Network Pruning: Use CNNs or Transformers? | Code | 0 |
| A Fair Loss Function for Network Pruning | Code | 0 |
| Finding Skill Neurons in Pre-trained Transformer-based Language Models | Code | 1 |
| A pruning method based on the dissimilarity of angle among channels and filters | Code | 0 |
| LearningGroup: A Real-Time Sparse Training on FPGA via Learnable Weight Grouping for Multi-Agent Reinforcement Learning | — | 0 |
| PP-StructureV2: A Stronger Document Analysis System | Code | 0 |
| SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters | Code | 1 |
| Energy Consumption of Neural Networks on NVIDIA Edge Boards: an Empirical Model | — | 0 |
| Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning | Code | 0 |
| Neural Network Panning: Screening the Optimal Sparse Network Before Training | Code | 0 |
| Learning ASR pathways: A sparse multilingual ASR model | — | 0 |
| One-shot Network Pruning at Initialization with Discriminative Image Patches | — | 0 |
| CWP: Instance complexity weighted channel-wise soft masks for network pruning | — | 0 |
| Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps | Code | 0 |
| Complexity-Driven CNN Compression for Resource-constrained Edge AI | — | 0 |
| N2NSkip: Learning Highly Sparse Networks using Neuron-to-Neuron Skip Connections | — | 0 |
| Trainability Preserving Neural Pruning | Code | 1 |
| DASS: Differentiable Architecture Search for Sparse neural networks | Code | 0 |
| Network Pruning via Feature Shift Minimization | Code | 0 |
| Specializing Pre-trained Language Models for Better Relational Reasoning via Network Pruning | Code | 1 |
| Studying the impact of magnitude pruning on contrastive learning methods | Code | 0 |
| Renormalized Sparse Neural Network Pruning | — | 0 |
| Winning the Lottery Ahead of Time: Efficient Early Network Pruning | Code | 1 |
| Distortion-Aware Network Pruning and Feature Reuse for Real-time Video Segmentation | — | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | — | Unverified |
| 2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | — | Unverified |
| 3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | — | Unverified |
| 4 | RegX-1.6G | Accuracy | 77.97 | — | Unverified |
| 5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | — | Unverified |
| 6 | ResNet50-3G FLOPs | Accuracy | 77.1 | — | Unverified |
| 7 | ResNet50-2G FLOPs | Accuracy | 76.4 | — | Unverified |
| 8 | ResNet50-1G FLOPs | Accuracy | 76.38 | — | Unverified |
| 9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | — | Unverified |
| 10 | ResNet50 | Accuracy | 75.59 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Feather | Top-1 Accuracy | 76.93 | — | Unverified |
| 2 | Spartan | Top-1 Accuracy | 76.17 | — | Unverified |
| 3 | ST-3 | Top-1 Accuracy | 76.03 | — | Unverified |
| 4 | AC/DC | Top-1 Accuracy | 75.64 | — | Unverified |
| 5 | CS | Top-1 Accuracy | 75.5 | — | Unverified |
| 6 | ProbMask | Top-1 Accuracy | 74.68 | — | Unverified |
| 7 | STR | Top-1 Accuracy | 74.31 | — | Unverified |
| 8 | DNW | Top-1 Accuracy | 74 | — | Unverified |
| 9 | GMP | Top-1 Accuracy | 73.91 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | +U-DML* | Inference Time (ms) | 675.56 | — | Unverified |
| 2 | Dense | Accuracy | 79 | — | Unverified |
| 3 | AC/DC | Accuracy | 78.2 | — | Unverified |
| 4 | Beta-Rank | Accuracy | 74.01 | — | Unverified |
| 5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | — | Unverified |
| 2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | — | Unverified |
| 3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | — | Unverified |
| 4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | — | Unverified |