SOTAVerified

Neural Network Compression

Papers

Showing 71–80 of 193 papers

| Title | Status | Hype |
| --- | --- | --- |
| Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation | Code | 0 |
| Improving Neural Network Quantization without Retraining using Outlier Channel Splitting | Code | 0 |
| Teacher-Class Network: A Neural Network Compression Mechanism | Code | 0 |
| Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters | Code | 0 |
| DP-Net: Dynamic Programming Guided Deep Neural Network Compression | — | 0 |
| DKM: Differentiable K-Means Clustering Layer for Neural Network Compression | — | 0 |
| Distilling Pixel-Wise Feature Similarities for Semantic Segmentation | — | 0 |
| An Overview of Neural Network Compression | — | 0 |
| Distilling Critical Paths in Convolutional Neural Networks | — | 0 |
| Differentiable Joint Pruning and Quantization for Hardware Efficiency | — | 0 |
Page 8 of 20

No leaderboard results yet.