SOTAVerified

Neural Network Compression

Papers

Showing 150 of 193 papers

| Title | Status | Hype |
| --- | --- | --- |
| DepGraph: Towards Any Structural Pruning | Code | 4 |
| Torch2Chip: An End-to-end Customizable Deep Neural Network Compression and Deployment Toolkit for Prototype Hardware Accelerator Design | Code | 2 |
| A Survey on Deep Neural Network Pruning-Taxonomy, Comparison, Analysis, and Recommendations | Code | 2 |
| Neural Network Compression Framework for fast model inference | Code | 2 |
| Data-Free Learning of Student Networks | Code | 2 |
| Towards Meta-Pruning via Optimal Transport | Code | 1 |
| SwiftTron: An Efficient Hardware Accelerator for Quantized Transformers | Code | 1 |
| PD-Quant: Post-Training Quantization based on Prediction Difference Metric | Code | 1 |
| SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks | Code | 1 |
| Wavelet Feature Maps Compression for Image-to-Image CNNs | Code | 1 |
| Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction | Code | 1 |
| CHIP: CHannel Independence-based Pruning for Compact Neural Networks | Code | 1 |
| NeRV: Neural Representations for Videos | Code | 1 |
| Prune Your Model Before Distill It | Code | 1 |
| Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better | Code | 1 |
| Spectral Tensor Train Parameterization of Deep Learning Layers | Code | 1 |
| FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation | Code | 1 |
| Robustness and Transferability of Universal Attacks on Compressed Models | Code | 1 |
| Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems | Code | 1 |
| T-Basis: a Compact Representation for Neural Networks | Code | 1 |
| WoodFisher: Efficient Second-Order Approximation for Neural Network Compression | Code | 1 |
| Neural network compression via learnable wavelet transforms | Code | 1 |
| The continuous categorical: a novel simplex-valued exponential family | Code | 1 |
| REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild | Code | 1 |
| Quantisation and Pruning for Neural Network Compression and Regularisation | Code | 1 |
| ZeroQ: A Novel Zero Shot Quantization Framework | Code | 1 |
| Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems | Code | 1 |
| Learning Filter Basis for Convolutional Neural Network Compression | Code | 1 |
| Linearity-based neural network compression | | 0 |
| MUC-G4: Minimal Unsat Core-Guided Incremental Verification for Deep Neural Network Compression | | 0 |
| Is Quantum Optimization Ready? An Effort Towards Neural Network Compression using Adiabatic Quantum Computing | | 0 |
| Certified Neural Approximations of Nonlinear Dynamics | Code | 0 |
| Low-Rank Matrix Approximation for Neural Network Compression | | 0 |
| GranQ: Granular Zero-Shot Quantization with Channel-Wise Activation Scaling in QAT | | 0 |
| Stabilizing Quantization-Aware Training by Implicit-Regularization on Hessian Matrix | | 0 |
| Compression of Site-Specific Deep Neural Networks for Massive MIMO Precoding | | 0 |
| A Novel Structure-Agnostic Multi-Objective Approach for Weight-Sharing Compression in Deep Neural Networks | | 0 |
| What is Left After Distillation? How Knowledge Transfer Impacts Fairness and Bias | | 0 |
| Efficient and Robust Knowledge Distillation from A Stronger Teacher Based on Correlation Matching | | 0 |
| Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models | Code | 0 |
| Adaptive Error-Bounded Hierarchical Matrices for Efficient Neural Network Compression | | 0 |
| TropNNC: Structured Neural Network Compression Using Tropical Geometry | | 0 |
| Unified Framework for Neural Network Compression via Decomposition and Optimal Rank Selection | | 0 |
| Convolutional Neural Network Compression Based on Low-Rank Decomposition | | 0 |
| Condensed Sample-Guided Model Inversion for Knowledge Distillation | | 0 |
| An Efficient Real-Time Object Detection Framework on Resource-Constricted Hardware Devices via Software and Hardware Co-design | | 0 |
| Tiled Bit Networks: Sub-Bit Neural Network Compression Through Reuse of Learnable Binary Vectors | | 0 |
| The Impact of Quantization and Pruning on Deep Reinforcement Learning Models | | 0 |
| Neural Network Compression for Reinforcement Learning Tasks | | 0 |
| Towards Explaining Deep Neural Network Compression Through a Probabilistic Latent Space | | 0 |
Page 1 of 4

No leaderboard results yet.