SOTAVerified

Network Pruning

Network pruning is a popular approach to obtaining a lightweight network from a heavy one by removing its redundancy. A complex, over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to recover performance comparable to the original with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
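The train-prune-fine-tune pipeline described above hinges on a pruning criterion; the most common is weight magnitude. Below is a minimal NumPy sketch of one-shot magnitude pruning; the function name and example values are illustrative, not taken from any particular paper on this page.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of entries with the smallest magnitude.

    Illustrative sketch only: real pruning pipelines typically apply a mask
    per layer and fine-tune the surviving weights afterwards.
    """
    if sparsity <= 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)           # number of entries to remove
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold      # keep strictly larger magnitudes
    return weights * mask

# Example: prune 50% of a tiny weight matrix.
w = np.array([[0.1, -0.9], [0.4, -0.05]])
pruned = magnitude_prune(w, 0.5)  # 0.9 and 0.4 survive; 0.1 and 0.05 are zeroed
```

In practice the mask is held fixed during the subsequent fine-tuning step, so gradients only update the surviving weights.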

Papers

Showing 351–400 of 534 papers

- The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning
- Pruning Before Training May Improve Generalization, Provably
- Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method
- GD doesn't make the cut: Three ways that non-differentiability affects neural network training
- New Pruning Method Based on DenseNet Network for Image Classification
- To prune or not to prune: A chaos-causality approach to principled pruning of dense neural networks
- Towards Communication-Learning Trade-off for Federated Learning at the Network Edge
- Towards Compact and Robust Deep Neural Networks
- Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning
- Towards Fairness-aware Adversarial Network Pruning
- Towards Lightweight Graph Neural Network Search with Curriculum Graph Sparsification
- Towards Lightweight Neural Animation: Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models
- Towards thinner convolutional neural networks through Gradually Global Pruning
- Network Pruning via Annealing and Direct Sparsity Control
- TraNNsformer: Neural network transformation for memristive crossbar based neuromorphic system design
- Troubleshooting Blind Image Quality Models in the Wild
- TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks
- Ultrafast Photorealistic Style Transfer via Neural Architecture Search
- Understanding Diversity Based Neural Network Pruning in Teacher Student Setup
- "Understanding Robustness Lottery": A Geometric Visual Comparative Analysis of Neural Network Pruning Approaches
- Unveiling Invariances via Neural Network Pruning
- Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory
- Variational Convolutional Neural Network Pruning
- Verification of Neural Networks: Enhancing Scalability through Pruning
- Waste not, Want not: All-Alive Pruning for Extremely Sparse Networks
- Weight-dependent Gates for Network Pruning
- Weight Reparametrization for Budget-Aware Network Pruning
- When Are Neural Pruning Approximation Bounds Useful?
- When to Prune? A Policy towards Early Structural Pruning
- Why Does DARTS Miss the Target, and How Do We Aim to Fix It?
- Win the Lottery Ticket via Fourier Analysis: Frequencies Guided Network Pruning
- Hyperflows: Pruning Reveals the Importance of Weights
- Hyperparameter Optimization with Neural Network Pruning
- Hyperpruning: Efficient Search through Pruned Variants of Recurrent Neural Networks Leveraging Lyapunov Spectrum
- Importance Estimation with Random Gradient for Neural Network Pruning
- Improve Convolutional Neural Network Pruning by Maximizing Filter Variety
- Iteratively Training Look-Up Tables for Network Quantization
- Joint Regularization on Activations and Weights for Efficient Neural Network Pruning
- Knowledge Distillation Circumvents Nonlinearity for Optical Convolutional Neural Networks
- On the Landscape of One-hidden-layer Sparse Networks and Beyond
- Layer-adaptive Structured Pruning Guided by Latency
- LEAN: graph-based pruning for convolutional neural networks by extracting longest chains
- Learning ASR pathways: A sparse multilingual ASR model
- Learning Compact Neural Networks with Regularization
- Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning
- LearningGroup: A Real-Time Sparse Training on FPGA via Learnable Weight Grouping for Multi-Agent Reinforcement Learning
- Learning Pruned Structure and Weights Simultaneously from Scratch: an Attention based Approach
- Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning
- Less is More: The Influence of Pruning on the Explainability of CNNs
- Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models (Code)

Benchmark Results

| #  | Model                | Metric   | Claimed | Verified | Status     |
|----|----------------------|----------|---------|----------|------------|
| 1  | ResNet50-2.3 GFLOPs  | Accuracy | 78.79   | -        | Unverified |
| 2  | ResNet50-1.5 GFLOPs  | Accuracy | 78.07   | -        | Unverified |
| 3  | ResNet50 2.5 GFLOPS  | Accuracy | 78      | -        | Unverified |
| 4  | RegX-1.6G            | Accuracy | 77.97   | -        | Unverified |
| 5  | ResNet50 2.0 GFLOPS  | Accuracy | 77.7    | -        | Unverified |
| 6  | ResNet50-3G FLOPs    | Accuracy | 77.1    | -        | Unverified |
| 7  | ResNet50-2G FLOPs    | Accuracy | 76.4    | -        | Unverified |
| 8  | ResNet50-1G FLOPs    | Accuracy | 76.38   | -        | Unverified |
| 9  | TAS-pruned ResNet-50 | Accuracy | 76.2    | -        | Unverified |
| 10 | ResNet50             | Accuracy | 75.59   | -        | Unverified |

| # | Model    | Metric         | Claimed | Verified | Status     |
|---|----------|----------------|---------|----------|------------|
| 1 | Feather  | Top-1 Accuracy | 76.93   | -        | Unverified |
| 2 | Spartan  | Top-1 Accuracy | 76.17   | -        | Unverified |
| 3 | ST-3     | Top-1 Accuracy | 76.03   | -        | Unverified |
| 4 | AC/DC    | Top-1 Accuracy | 75.64   | -        | Unverified |
| 5 | CS       | Top-1 Accuracy | 75.5    | -        | Unverified |
| 6 | ProbMask | Top-1 Accuracy | 74.68   | -        | Unverified |
| 7 | STR      | Top-1 Accuracy | 74.31   | -        | Unverified |
| 8 | DNW      | Top-1 Accuracy | 74      | -        | Unverified |
| 9 | GMP      | Top-1 Accuracy | 73.91   | -        | Unverified |

| # | Model                 | Metric              | Claimed | Verified | Status     |
|---|-----------------------|---------------------|---------|----------|------------|
| 1 | +U-DML*               | Inference Time (ms) | 675.56  | -        | Unverified |
| 2 | Dense                 | Accuracy            | 79      | -        | Unverified |
| 3 | AC/DC                 | Accuracy            | 78.2    | -        | Unverified |
| 4 | Beta-Rank             | Accuracy            | 74.01   | -        | Unverified |
| 5 | TAS-pruned ResNet-110 | Accuracy            | 73.16   | -        | Unverified |

| # | Model                 | Metric              | Claimed | Verified | Status     |
|---|-----------------------|---------------------|---------|----------|------------|
| 1 | TAS-pruned ResNet-110 | Accuracy            | 94.33   | -        | Unverified |
| 2 | ShuffleNet, Quantised | Inference Time (ms) | 23.15   | -        | Unverified |
| 3 | AlexNet, Quantised    | Inference Time (ms) | 5.23    | -        | Unverified |
| 4 | MobileNet, Quantised  | Inference Time (ms) | 4.74    | -        | Unverified |

| # | Model             | Metric     | Claimed | Verified | Status     |
|---|-------------------|------------|---------|----------|------------|
| 1 | FFN-ShapleyPruned | Avg #Steps | 12.05   | -        | Unverified |