
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power and resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
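To make the three techniques named above concrete, here is a minimal NumPy sketch of each applied to a single weight matrix. It is illustrative only and unrelated to the cited paper's implementation; the matrix shape, sparsity level, rank r, and bit-width b are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in dense weight matrix

# Parameter pruning: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = W * (np.abs(W) >= threshold)

# Low-rank factorization: replace W with a rank-r approximation, storing
# two thin factors (2 * 256 * r values) instead of 256 * 256.
r = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :r] * S[:r]) @ Vt[:r, :]

# Weight quantization: map each float32 weight to one of 2**b uniform
# levels, so each weight costs b bits plus a shared scale and offset.
b = 8
lo, hi = float(W.min()), float(W.max())
scale = (hi - lo) / (2**b - 1)
codes = np.round((W - lo) / scale).astype(np.uint8)   # what gets stored
W_quant = codes.astype(np.float32) * scale + lo       # dequantized view

print("fraction of weights kept:", np.count_nonzero(W_pruned) / W.size)
print("low-rank rel. error:", np.linalg.norm(W - W_lowrank) / np.linalg.norm(W))
print("quantization rel. error:", np.linalg.norm(W - W_quant) / np.linalg.norm(W))
```

In practice these techniques are usually combined with fine-tuning, since pruning or quantizing a trained network in one shot, as above, typically costs some accuracy.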

Papers

Showing 631–640 of 1356 papers

Title | Status | Hype
Robust and Large-Payload DNN Watermarking via Fixed, Distribution-Optimized, Weights | Code | 0
Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey | | 0
Enhancing Targeted Attack Transferability via Diversified Weight Pruning | | 0
An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers | | 0
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment | Code | 0
Triple Sparsification of Graph Convolutional Networks without Sacrificing the Accuracy | | 0
Model Blending for Text Classification | | 0
Quiver neural networks | | 0
Efficient model compression with Random Operation Access Specific Tile (ROAST) hashing | Code | 0
Model Compression for Resource-Constrained Mobile Robots | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
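DKM (differentiable k-means) compresses weights by clustering them around a small shared codebook; a "2bit-1dim" configuration means each scalar weight is replaced by one of 2^2 = 4 centroids. The sketch below uses plain hard-assignment k-means to show the storage idea only; it is not the DKM algorithm, which instead relaxes the assignment so clustering can be trained end to end, and the function and variable names here are hypothetical.

```python
import numpy as np

def cluster_weights(w, bits=2, iters=50):
    """Quantize a weight array to 2**bits k-means centroids (hard assignment)."""
    flat = w.ravel()
    # Initialize centroids spread evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), 2**bits)
    for _ in range(iters):
        # Assign every scalar weight to its nearest centroid (1-dim distance).
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for k in range(len(centroids)):
            members = flat[assign == k]
            if members.size:
                centroids[k] = members.mean()
    # Only `assign` (bits per weight) and the tiny codebook need to be stored.
    return centroids[assign].reshape(w.shape), assign.reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(128, 128)).astype(np.float32)
w_q, codes = cluster_weights(w, bits=2)
print("unique values:", np.unique(w_q).size)  # at most 4 for 2-bit clustering
print("rel. error:", np.linalg.norm(w - w_q) / np.linalg.norm(w))
```

The gap between the two benchmark rows above illustrates the usual trade-off: halving the bit-width from 2 bits (4 centroids) to 1 bit (2 centroids) shrinks the model further but costs substantial accuracy.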