SOTAVerified

Network Pruning

Network pruning is a popular approach for reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve performance comparable to the original with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
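The train, prune, fine-tune pipeline described above can be illustrated with a toy prune step. The sketch below uses unstructured magnitude pruning (zeroing the smallest-magnitude weights) as the criterion; the `magnitude_prune` helper is hypothetical and written for illustration only, and real methods use richer criteria such as per-filter L1 norms or Taylor-expansion scores.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest |w|.

    A minimal sketch of the "prune" step in the
    train -> prune -> fine-tune pipeline, not any specific
    published method.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest absolute value acts as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep only weights above it
    return weights * mask, mask

# Toy example: prune 50% of a 4x4 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, 0.5)
print(f"sparsity achieved: {1 - mask.mean():.2f}")  # -> sparsity achieved: 0.50
```

In a full pipeline, the surviving weights (`pruned`) would then be fine-tuned for a few epochs, with the mask applied after each update so that pruned connections stay at zero.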

Papers

Showing 226–250 of 534 papers

Title | Hype
Automatic Sparse Connectivity Learning for Neural Networks | 0
Fusion-Catalyzed Pruning for Optimizing Deep Learning on Intelligent Edge Devices | 0
Coarse and fine-grained automatic cropping deep convolutional neural network | 0
GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization | 0
Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions | 0
Differential Privacy Meets Neural Network Pruning | 0
Adaptive Neural Connections for Sparsity Learning | 0
Learning Compact Neural Networks with Regularization | 0
GPU Acceleration of Sparse Neural Networks | 0
LearningGroup: A Real-Time Sparse Training on FPGA via Learnable Weight Grouping for Multi-Agent Reinforcement Learning | 0
Graph Attention Network based Pruning for Reconstructing 3D Liver Vessel Morphology from Contrasted CT Images | 0
Explicit Group Sparse Projection with Applications to Deep Learning and NMF | 0
Differentiable Network Pruning for Microcontrollers | 0
Compact Neural Representation Using Attentive Network Pruning | 0
Hierarchical Action Classification with Network Pruning | 0
Differentiable Channel Sparsity Search via Weight Sharing within Filters | 0
Hierarchical Human Action Classification with Network Pruning | 0
Automatic Pruning via Structured Lasso with Class-wise Information | 0
Connection Sensitivity Matters for Training-free DARTS: From Architecture-Level Scoring to Operation-Level Sensitivity Analysis | 0
A Lottery Ticket Hypothesis Framework for Low-Complexity Device-Robust Neural Acoustic Scene Classification | 0
Automatic Block-wise Pruning with Auxiliary Gating Structures for Deep Convolutional Neural Networks | 0
Hyperflows: Pruning Reveals the Importance of Weights | 0
Hyperparameter Optimization with Neural Network Pruning | 0
Hyperpruning: Efficient Search through Pruned Variants of Recurrent Neural Networks Leveraging Lyapunov Spectrum | 0
Heterogeneous Federated Learning using Dynamic Model Pruning and Adaptive Gradient | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | | Unverified
10 | ResNet50 | Accuracy | 75.59 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | | Unverified
5 | CS | Top-1 Accuracy | 75.5 | | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | | Unverified
7 | STR | Top-1 Accuracy | 74.31 | | Unverified
8 | DNW | Top-1 Accuracy | 74 | | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | | Unverified
2 | Dense | Accuracy | 79 | | Unverified
3 | AC/DC | Accuracy | 78.2 | | Unverified
4 | Beta-Rank | Accuracy | 74.01 | | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | | Unverified