
Neural Network Compression

Papers

Showing 11–20 of 193 papers

Title | Status | Hype
Efficient and Robust Knowledge Distillation from A Stronger Teacher Based on Correlation Matching | - | 0
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models | Code | 0
Adaptive Error-Bounded Hierarchical Matrices for Efficient Neural Network Compression | - | 0
TropNNC: Structured Neural Network Compression Using Tropical Geometry | - | 0
Unified Framework for Neural Network Compression via Decomposition and Optimal Rank Selection | - | 0
Convolutional Neural Network Compression Based on Low-Rank Decomposition | - | 0
Condensed Sample-Guided Model Inversion for Knowledge Distillation | - | 0
An Efficient Real-Time Object Detection Framework on Resource-Constricted Hardware Devices via Software and Hardware Co-design | - | 0
Tiled Bit Networks: Sub-Bit Neural Network Compression Through Reuse of Learnable Binary Vectors | - | 0
The Impact of Quantization and Pruning on Deep Reinforcement Learning Models | - | 0
Page 2 of 20
