SOTAVerified

Neural Network Compression

Papers

Showing 11–20 of 193 papers

Title | Status | Hype
--- | --- | ---
Prune Your Model Before Distill It | Code | 1
Quantisation and Pruning for Neural Network Compression and Regularisation | Code | 1
Learning Filter Basis for Convolutional Neural Network Compression | Code | 1
Robustness and Transferability of Universal Attacks on Compressed Models | Code | 1
FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation | Code | 1
Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction | Code | 1
Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems | Code | 1
Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better | Code | 1
Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems | Code | 1
PD-Quant: Post-Training Quantization based on Prediction Difference Metric | Code | 1
Page 2 of 20

No leaderboard results yet.