SOTAVerified

Neural Network Compression

Papers

Showing 171–180 of 193 papers

Title | Status | Hype
A Closer Look at Structured Pruning for Neural Network Compression | Code | 0
DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression | Code | 0
COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning | Code | 0
Magnitude and Similarity based Variable Rate Filter Pruning for Efficient Convolution Neural Networks | Code | 0
Causal-DFQ: Causality Guided Data-free Network Quantization | Code | 0
Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters | Code | 0
Neural Network Compression of ACAS Xu Early Prototype is Unsafe: Closed-Loop Verification through Quantized State Backreachability | Code | 0
Teacher-Class Network: A Neural Network Compression Mechanism | Code | 0
Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks | Code | 0
Deep Neural Network Compression with Single and Multiple Level Quantization | Code | 0
Page 18 of 20

No leaderboard results yet.