SOTAVerified

Neural Network Compression

Papers

Showing 1–50 of 193 papers

Title | Status | Hype
DepGraph: Towards Any Structural Pruning | Code | 4
Data-Free Learning of Student Networks | Code | 2
Torch2Chip: An End-to-end Customizable Deep Neural Network Compression and Deployment Toolkit for Prototype Hardware Accelerator Design | Code | 2
Neural Network Compression Framework for fast model inference | Code | 2
A Survey on Deep Neural Network Pruning-Taxonomy, Comparison, Analysis, and Recommendations | Code | 2
Quantisation and Pruning for Neural Network Compression and Regularisation | Code | 1
Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems | Code | 1
CHIP: CHannel Independence-based Pruning for Compact Neural Networks | Code | 1
Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction | Code | 1
ZeroQ: A Novel Zero Shot Quantization Framework | Code | 1
Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better | Code | 1
SwiftTron: An Efficient Hardware Accelerator for Quantized Transformers | Code | 1
PD-Quant: Post-Training Quantization based on Prediction Difference Metric | Code | 1
Learning Filter Basis for Convolutional Neural Network Compression | Code | 1
FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation | Code | 1
The continuous categorical: a novel simplex-valued exponential family | Code | 1
T-Basis: a Compact Representation for Neural Networks | Code | 1
Neural network compression via learnable wavelet transforms | Code | 1
WoodFisher: Efficient Second-Order Approximation for Neural Network Compression | Code | 1
Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems | Code | 1
Prune Your Model Before Distill It | Code | 1
Spectral Tensor Train Parameterization of Deep Learning Layers | Code | 1
REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild | Code | 1
SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks | Code | 1
NeRV: Neural Representations for Videos | Code | 1
Towards Meta-Pruning via Optimal Transport | Code | 1
Wavelet Feature Maps Compression for Image-to-Image CNNs | Code | 1
Robustness and Transferability of Universal Attacks on Compressed Models | Code | 1
Neural Network Compression of ACAS Xu Early Prototype is Unsafe: Closed-Loop Verification through Quantized State Backreachability | Code | 0
Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression | Code | 0
Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks | Code | 0
Certified Neural Approximations of Nonlinear Dynamics | Code | 0
MUSCO: Multi-Stage Compression of neural networks | Code | 0
Causal-DFQ: Causality Guided Data-free Network Quantization | Code | 0
Magnitude and Similarity based Variable Rate Filter Pruning for Efficient Convolution Neural Networks | Code | 0
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks | Code | 0
Learning Sparse Networks Using Targeted Dropout | Code | 0
Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters | Code | 0
Neural Network Compression Using Higher-Order Statistics and Auxiliary Reconstruction Losses | Code | 0
Joint Matrix Decomposition for Deep Convolutional Neural Networks Compression | Code | 0
Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD | Code | 0
Efficient Neural Network Compression | Code | 0
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting | Code | 0
Few Sample Knowledge Distillation for Efficient Network Compression | Code | 0
Forward and Backward Information Retention for Accurate Binary Neural Networks | Code | 0
COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning | Code | 0
Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations | Code | 0
Deep convolutional neural network compression via coupled tensor decomposition | Code | 0
Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-based Approach | Code | 0
Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks | Code | 0
Page 1 of 4

Leaderboard

No leaderboard results yet.