
Network Pruning

Network Pruning is a popular approach for reducing a heavy network to a lightweight form by removing its redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve comparable performance with fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
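For illustration, below is a minimal sketch of the train, prune, and fine-tune pipeline described above, using global magnitude (L1) pruning from PyTorch's torch.nn.utils.prune. The model, data loader, sparsity level, and training hyperparameters are placeholder assumptions, not taken from any paper listed here.

```python
# Minimal sketch of the train -> prune -> fine-tune pipeline using
# global magnitude (L1) pruning from torch.nn.utils.prune.
# Model, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def train(model, loader, epochs, lr):
    """Standard supervised training loop, used both for the initial
    training phase and for fine-tuning after pruning."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def prune_and_finetune(model, loader, sparsity=0.5):
    # 1) Train the over-parameterized network.
    train(model, loader, epochs=90, lr=0.1)

    # 2) Prune: globally remove the fraction `sparsity` of weights
    #    with the smallest L1 magnitude (one common criterion).
    to_prune = [(m, "weight") for m in model.modules()
                if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(to_prune,
                              pruning_method=prune.L1Unstructured,
                              amount=sparsity)

    # 3) Fine-tune the pruned network at a lower learning rate to
    #    recover accuracy.
    train(model, loader, epochs=20, lr=0.01)

    # Make the pruning permanent by folding the masks into the weights.
    for module, name in to_prune:
        prune.remove(module, name)
    return model
```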

Papers

Showing 276–300 of 534 papers

Title | Status | Hype
Learning Compact Neural Networks with Regularization | – | 0
Can We Find Strong Lottery Tickets in Generative Models? | – | 0
Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning | – | 0
LearningGroup: A Real-Time Sparse Training on FPGA via Learnable Weight Grouping for Multi-Agent Reinforcement Learning | – | 0
Adaptive Consensus: A network pruning approach for decentralized optimization | – | 0
Learning Pruned Structure and Weights Simultaneously from Scratch: an Attention based Approach | – | 0
Structured Pruning Meets Orthogonality | – | 0
Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning | – | 0
Win the Lottery Ticket via Fourier Analysis: Frequencies Guided Network Pruning | – | 0
Less is More: The Influence of Pruning on the Explainability of CNNs | – | 0
Can network pruning benefit deep learning under label noise? | – | 0
Linear Mode Connectivity in Sparse Neural Networks | – | 0
Lipschitz Constant Meets Condition Number: Learning Robust and Compact Deep Neural Networks | – | 0
Structured Pruning of Recurrent Neural Networks through Neuron Selection | – | 0
LNPT: Label-free Network Pruning and Training | – | 0
Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning | – | 0
Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy | – | 0
C2S2: Cost-aware Channel Sparse Selection for Progressive Network Pruning | – | 0
Brain-Inspired Efficient Pruning: Exploiting Criticality in Spiking Neural Networks | – | 0
Low-Rank Prune-And-Factorize for Language Model Compression | – | 0
When Are Neural Pruning Approximation Bounds Useful? | – | 0
Block Pruning for Enhanced Efficiency in Convolutional Neural Networks | – | 0
Blending Pruning Criteria for Convolutional Neural Networks | – | 0
MARS: Multi-macro Architecture SRAM CIM-Based Accelerator with Co-designed Compressed Neural Networks | – | 0
MaskConvNet: Training Efficient ConvNets from Scratch via Budget-constrained Filter Pruning | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50-2.3 GFLOPs | Accuracy | 78.79 | – | Unverified
2 | ResNet50-1.5 GFLOPs | Accuracy | 78.07 | – | Unverified
3 | ResNet50 2.5 GFLOPS | Accuracy | 78 | – | Unverified
4 | RegX-1.6G | Accuracy | 77.97 | – | Unverified
5 | ResNet50 2.0 GFLOPS | Accuracy | 77.7 | – | Unverified
6 | ResNet50-3G FLOPs | Accuracy | 77.1 | – | Unverified
7 | ResNet50-2G FLOPs | Accuracy | 76.4 | – | Unverified
8 | ResNet50-1G FLOPs | Accuracy | 76.38 | – | Unverified
9 | TAS-pruned ResNet-50 | Accuracy | 76.2 | – | Unverified
10 | ResNet50 | Accuracy | 75.59 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Feather | Top-1 Accuracy | 76.93 | – | Unverified
2 | Spartan | Top-1 Accuracy | 76.17 | – | Unverified
3 | ST-3 | Top-1 Accuracy | 76.03 | – | Unverified
4 | AC/DC | Top-1 Accuracy | 75.64 | – | Unverified
5 | CS | Top-1 Accuracy | 75.5 | – | Unverified
6 | ProbMask | Top-1 Accuracy | 74.68 | – | Unverified
7 | STR | Top-1 Accuracy | 74.31 | – | Unverified
8 | DNW | Top-1 Accuracy | 74 | – | Unverified
9 | GMP | Top-1 Accuracy | 73.91 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | +U-DML* | Inference Time (ms) | 675.56 | – | Unverified
2 | Dense | Accuracy | 79 | – | Unverified
3 | AC/DC | Accuracy | 78.2 | – | Unverified
4 | Beta-Rank | Accuracy | 74.01 | – | Unverified
5 | TAS-pruned ResNet-110 | Accuracy | 73.16 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TAS-pruned ResNet-110 | Accuracy | 94.33 | – | Unverified
2 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | – | Unverified
3 | AlexNet – Quantised | Inference Time (ms) | 5.23 | – | Unverified
4 | MobileNet – Quantised | Inference Time (ms) | 4.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | FFN-ShapleyPruned | Avg #Steps | 12.05 | – | Unverified