Network Pruning

Network pruning is a popular approach for reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve comparable performance with a reduced parameter count.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
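
The definition above describes a three-stage pipeline: train a large network, prune it by some importance criterion, then fine-tune. Below is a minimal illustrative sketch of that pipeline using PyTorch's torch.nn.utils.prune with global L1-magnitude pruning; the toy model, random data, epoch counts, and the 50% sparsity target are placeholder assumptions chosen for the example, not taken from the source paper.

```python
# Minimal train -> prune -> fine-tune sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Toy over-parameterized model and random data (placeholders).
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
x, y = torch.randn(256, 64), torch.randint(0, 10, (256,))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def train(epochs):
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# 1) Train the dense, over-parameterized network.
train(epochs=20)

# 2) Prune: globally zero out the 50% of linear-layer weights with the
#    smallest L1 magnitude (the pruning criterion used in this sketch).
to_prune = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.5)

# 3) Fine-tune with the pruning masks applied, then make the masks permanent.
train(epochs=20)
for module, name in to_prune:
    prune.remove(module, name)

zeros = sum((m.weight == 0).sum().item() for m, _ in to_prune)
total = sum(m.weight.numel() for m, _ in to_prune)
print(f"weight sparsity after pruning: {zeros / total:.2%}")
```

Structured variants such as filter pruning (used by several papers listed below) drop whole filters or channels rather than individual weights, but they follow the same train → prune → fine-tune loop.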

Papers

Showing 201–250 of 534 papers

Title | Status | Hype
Model Compression Methods for YOLOv5: A Review | – | 0
SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning | – | 0
Neural Network Pruning as Spectrum Preserving Process | – | 0
Distilled Pruning: Using Synthetic Data to Win the Lottery | Code | 0
Structured Network Pruning by Measuring Filter-wise Interactions | – | 0
Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler | Code | 0
Low-Rank Prune-And-Factorize for Language Model Compression | – | 0
Neural Network Pruning for Real-time Polyp Segmentation | – | 0
Representation and decomposition of functions in DAG-DNNs and structural network pruning | – | 0
Improving Generalization in Meta-Learning via Meta-Gradient Augmentation | Code | 0
Resource Efficient Neural Networks Using Hessian Based Pruning | – | 0
Does a sparse ReLU network training problem always admit an optimum? | – | 0
Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data | Code | 0
Layer-adaptive Structured Pruning Guided by Latency | – | 0
Combining Multi-Objective Bayesian Optimization with Reinforcement Learning for TinyML | – | 0
Probabilistic Modeling: Proving the Lottery Ticket Hypothesis in Spiking Neural Network | – | 0
Concept-Monitor: Understanding DNN training through individual neurons | – | 0
Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures | – | 0
Network Pruning Spaces | – | 0
Model Pruning Enables Localized and Efficient Federated Learning for Yield Forecasting and Data Sharing | – | 0
Beta-Rank: A Robust Convolutional Filter Pruning Method For Imbalanced Medical Image Analysis | Code | 0
DIPNet: Efficiency Distillation and Iterative Pruning for Image Super-Resolution | – | 0
Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural Network Pruning | – | 0
The Other Side of Compression: Measuring Bias in Pruned Transformers | Code | 0
A Multi-objective Complex Network Pruning Framework Based on Divide-and-conquer and Global Performance Impairment Ranking | – | 0
Improving the Transferability of Adversarial Examples via Direction Tuning | Code | 0
Protective Self-Adaptive Pruning to Better Compress DNNs | – | 0
Differential Privacy Meets Neural Network Pruning | – | 0
Knowledge-Enhanced Semi-Supervised Federated Learning for Aggregating Heterogeneous Lightweight Clients in IoT | Code | 0
Parameter Sharing with Network Pruning for Scalable Multi-Agent Deep Reinforcement Learning | – | 0
Fast as CHITA: Neural Network Pruning with Combinatorial Optimization | – | 0
Less is More: The Influence of Pruning on the Explainability of CNNs | – | 0
WHC: Weighted Hybrid Criterion for Filter Pruning on Convolutional Neural Networks | Code | 0
ST-MFNet Mini: Knowledge Distillation-Driven Frame Interpolation | Code | 0
Adaptive Search-and-Training for Robust and Efficient Network Pruning | Code | 0
Certified Invertibility in Neural Networks via Mixed-Integer Programming | – | 0
Accelerating and Compressing Deep Neural Networks for Massive MIMO CSI Feedback | Code | 0
Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions | – | 0
Pruning Compact ConvNets for Efficient Inference | – | 0
Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding | Code | 0
Pruning Before Training May Improve Generalization, Provably | – | 0
Towards Fairness-aware Adversarial Network Pruning | – | 0
Structural Alignment for Network Pruning through Partial Regularization | – | 0
Magnitude and Similarity based Variable Rate Filter Pruning for Efficient Convolution Neural Networks | Code | 0
Can We Find Strong Lottery Tickets in Generative Models? | – | 0
AP: Selective Activation for De-sparsifying Pruned Neural Networks | – | 0
Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks | – | 0
Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning | – | 0
Attend Who is Weak: Pruning-assisted Medical Image Localization under Sophisticated and Implicit Imbalances | – | 0
Distributed Pruning Towards Tiny Neural Networks in Federated Learning | – | 0
Page 5 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | – | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | – | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | – | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | – | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | – | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | – | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | – | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | – | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | – | Unverified
10 | ResNet50 | Accuracy | 75.59 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | – | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | – | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | – | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | – | Unverified
5 | CS | Top-1 Accuracy | 75.5 | – | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | – | Unverified
7 | STR | Top-1 Accuracy | 74.31 | – | Unverified
8 | DNW | Top-1 Accuracy | 74 | – | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | – | Unverified
2 | Dense | Accuracy | 79 | – | Unverified
3 | AC/DC | Accuracy | 78.2 | – | Unverified
4 | Beta-Rank | Accuracy | 74.01 | – | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | – | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | – | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | – | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | – | Unverified