
Network Pruning

Network pruning is a popular approach to reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve performance comparable to the original with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
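
As a concrete illustration of the train → prune → fine-tune pipeline described above, here is a minimal sketch using PyTorch's torch.nn.utils.prune, assuming L1-magnitude pruning as the criterion. The model, synthetic data, sparsity level, and hyperparameters are illustrative placeholders, not a prescription from the source paper.

```python
# Minimal train -> prune -> fine-tune sketch (L1-magnitude pruning assumed).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def train(model, steps, lr):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(32, 784)           # stand-in for a real data batch
        y = torch.randint(0, 10, (32,))    # stand-in for real labels
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# 1. Train the over-parameterized network.
train(model, steps=100, lr=0.1)

# 2. Prune: zero out the 50% of weights with smallest L1 magnitude per layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# 3. Fine-tune: the pruning mask stays fixed, so pruned weights receive zero
#    effective gradient and only the surviving weights are updated.
train(model, steps=100, lr=0.01)

# Fold the mask into the weight tensor, making the sparsity permanent.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```

In this sketch the criterion is per-layer weight magnitude; the same scaffold accommodates other criteria (e.g., structured filter pruning) by swapping the pruning call.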

Papers

Showing 251–300 of 534 papers

Title | Status | Hype
Adversarial Robustness of Distilled and Pruned Deep Learning-based Wireless Classifiers | – | 0
Importance Estimation with Random Gradient for Neural Network Pruning | – | 0
Improve Convolutional Neural Network Pruning by Maximizing Filter Variety | – | 0
ADMP: An Adversarial Double Masks Based Pruning Framework For Unsupervised Cross-Domain Compression | – | 0
Adaptive Neural Connections for Sparsity Learning | – | 0
Spectral Analysis for Semantic Segmentation with Applications on Feature Truncation and Weak Annotation | – | 0
Channel-wise pruning of neural networks with tapering resource constraint | – | 0
Heterogeneous Federated Learning using Dynamic Model Pruning and Adaptive Gradient | – | 0
Channel Planting for Deep Neural Networks using Knowledge Distillation | – | 0
SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning | – | 0
Iteratively Training Look-Up Tables for Network Quantization | – | 0
Accelerating Convolutional Neural Network Pruning via Spatial Aura Entropy | – | 0
Certified Invertibility in Neural Networks via Mixed-Integer Programming | – | 0
Joint Regularization on Activations and Weights for Efficient Neural Network Pruning | – | 0
Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning | – | 0
Knowledge Distillation Circumvents Nonlinearity for Optical Convolutional Neural Networks | – | 0
Weight Reparametrization for Budget-Aware Network Pruning | – | 0
Streamlining Tensor and Network Pruning in PyTorch | – | 0
On the Landscape of One-hidden-layer Sparse Networks and Beyond | – | 0
Layer-adaptive Structured Pruning Guided by Latency | – | 0
CWP: Instance complexity weighted channel-wise soft masks for network pruning | – | 0
Structural Alignment for Network Pruning through Partial Regularization | – | 0
Structurally Prune Anything: Any Architecture, Any Framework, Any Time | – | 0
LEAN: graph-based pruning for convolutional neural networks by extracting longest chains | – | 0
Learning ASR pathways: A sparse multilingual ASR model | – | 0
Learning Compact Neural Networks with Regularization | – | 0
CAP: Context-Aware Pruning for Semantic-Segmentation | – | 0
Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning | – | 0
LearningGroup: A Real-Time Sparse Training on FPGA via Learnable Weight Grouping for Multi-Agent Reinforcement Learning | – | 0
Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework | – | 0
Learning Pruned Structure and Weights Simultaneously from Scratch: an Attention based Approach | – | 0
Structured Deep Neural Network Pruning via Matrix Pivoting | – | 0
Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning | – | 0
Structured Network Pruning by Measuring Filter-wise Interactions | – | 0
Less is More: The Influence of Pruning on the Explainability of CNNs | – | 0
CAP-Context-Aware-Pruning-for-Semantic-Segmentation | – | 0
Linear Mode Connectivity in Sparse Neural Networks | – | 0
Lipschitz Constant Meets Condition Number: Learning Robust and Compact Deep Neural Networks | – | 0
Structured Pattern Pruning Using Regularization | – | 0
LNPT: Label-free Network Pruning and Training | – | 0
Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning | – | 0
Adaptive Consensus: A network pruning approach for decentralized optimization | – | 0
Can We Find Strong Lottery Tickets in Generative Models? | – | 0
Can network pruning benefit deep learning under label noise? | – | 0
Low-Rank Prune-And-Factorize for Language Model Compression | – | 0
Structured Pruning Meets Orthogonality | – | 0
C2S2: Cost-aware Channel Sparse Selection for Progressive Network Pruning | – | 0
Brain-Inspired Efficient Pruning: Exploiting Criticality in Spiking Neural Networks | – | 0
MARS: Multi-macro Architecture SRAM CIM-Based Accelerator with Co-designed Compressed Neural Networks | – | 0
MaskConvNet: Training Efficient ConvNets from Scratch via Budget-constrained Filter Pruning | – | 0
Page 6 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | – | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | – | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | – | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | – | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | – | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | – | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | – | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | – | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | – | Unverified
10 | ResNet50 | Accuracy | 75.59 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | – | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | – | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | – | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | – | Unverified
5 | CS | Top-1 Accuracy | 75.5 | – | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | – | Unverified
7 | STR | Top-1 Accuracy | 74.31 | – | Unverified
8 | DNW | Top-1 Accuracy | 74 | – | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | – | Unverified
2 | Dense | Accuracy | 79 | – | Unverified
3 | AC/DC | Accuracy | 78.2 | – | Unverified
4 | Beta-Rank | Accuracy | 74.01 | – | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | – | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | – | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | – | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | – | Unverified