Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
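As a concrete illustration of the three techniques named above, below is a minimal NumPy sketch. The function names (`magnitude_prune`, `low_rank_factorize`, `uniform_quantize`) and all parameter choices are illustrative assumptions, not taken from any paper listed on this page.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that `sparsity`
    fraction of entries become zero (illustrative sketch)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) > threshold, weights, 0.0)

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Approximate W with rank-r factors U, V (W ~ U @ V) via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

def uniform_quantize(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Uniformly quantize weights to 2**num_bits levels, then dequantize,
    simulating the accuracy effect of low-bit storage."""
    levels = 2 ** num_bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels
    q = np.round((weights - w_min) / scale)
    return q * scale + w_min

# Example: prune 90% of a random weight matrix, then quantize to 4 bits.
w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)
w_compressed = uniform_quantize(w_pruned, num_bits=4)
u_r, v_r = low_rank_factorize(w, rank=32)  # 256x256 -> 2 * (256x32) factors
print(f"nonzero fraction after pruning: {(w_pruned != 0).mean():.3f}")
print(f"low-rank reconstruction error: {np.linalg.norm(w - u_r @ v_r):.2f}")
```

In practice these steps are typically applied to trained network layers and followed by fine-tuning to recover accuracy; the sketch above only demonstrates the core tensor operations.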

Papers

Showing 1131–1140 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Structured Multi-Hashing for Model Compression | — | 0 |
| A SOT-MRAM-based Processing-In-Memory Engine for Highly Compressed DNN Implementation | — | 0 |
| Graph Pruning for Model Compression | — | 0 |
| Few Shot Network Compression via Cross Distillation | Code | 0 |
| On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning | Code | 0 |
| DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks | — | 0 |
| Distributed Low Precision Training Without Mixed Precision | — | 0 |
| ASCAI: Adaptive Sampling for acquiring Compact AI | — | 0 |
| Data Efficient Stagewise Knowledge Distillation | Code | 0 |
| Learning from a Teacher using Unlabeled Data | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified |