SOTAVerified

Neural Network Compression

Papers

Showing 51–60 of 193 papers

Title | Status | Hype
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models | Code | 0
DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression | Code | 0
COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning | Code | 0
Differentiable Fine-grained Quantization for Deep Neural Network Compression | Code | 0
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting | Code | 0
Dirichlet Pruning for Neural Network Compression | Code | 0
Forward and Backward Information Retention for Accurate Binary Neural Networks | Code | 0
Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations | Code | 0
Additive Tree-Structured Conditional Parameter Spaces in Bayesian Optimization: A Novel Covariance Function and a Fast Implementation | Code | 0
Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks | Code | 0
Page 6 of 20

No leaderboard results yet.