
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
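The three techniques named above can each be illustrated on a single weight matrix. Below is a minimal NumPy sketch, not drawn from any particular paper: the matrix `W`, the 90% pruning ratio, the rank `k = 32`, and the 4-bit setting are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # a dense weight matrix

# 1. Parameter pruning: zero out the 90% of weights with smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2. Low-rank factorization: keep the top-k singular directions, so that
# W is approximated by U_k @ V_k, storing 2*256*k values instead of 256*256.
k = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_k = U[:, :k] * s[:k]   # fold singular values into the left factor
V_k = Vt[:k, :]
W_lowrank = U_k @ V_k

# 3. Weight quantization: map each weight to one of 2**bits uniform levels.
bits = 4
scale = (W.max() - W.min()) / (2**bits - 1)
codes = np.round((W - W.min()) / scale).astype(np.uint8)  # stored as 4-bit codes
W_quant = codes * scale + W.min()                         # dequantized view

for name, approx in [("pruning", W_pruned), ("low-rank", W_lowrank), ("quantization", W_quant)]:
    err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    print(f"{name}: relative reconstruction error {err:.3f}")
```

In practice these transformations are applied to trained networks and usually followed by fine-tuning to recover accuracy; the relative error printed here only shows the raw approximation cost on a random matrix.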

Papers

Showing 381–390 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Finding Deviated Behaviors of the Compressed DNN Models for Image Classifications | Code | 0 |
| Computer Vision Model Compression Techniques for Embedded Systems: A Survey | Code | 0 |
| Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks | Code | 0 |
| Faithful Label-free Knowledge Distillation | Code | 0 |
| Few Shot Network Compression via Cross Distillation | Code | 0 |
| Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations | Code | 0 |
| Explicit-NeRF-QA: A Quality Assessment Database for Explicit NeRF Model Compression | Code | 0 |
| Adversarial Robustness vs. Model Compression, or Both? | Code | 0 |
| Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression | Code | 0 |
| Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective | Code | 0 |
Page 39 of 136

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified |
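The DKM entries above refer to k-means-style weight clustering: "2bit-1dim" denotes 2² = 4 shared centroids over scalar (1-dimensional) weights. DKM itself makes the cluster assignment differentiable during training, which is not reproduced here; the sketch below shows only the hard k-means codebook idea it relaxes, and the helper name `cluster_weights` is hypothetical.

```python
import numpy as np

def cluster_weights(w: np.ndarray, bits: int = 2, iters: int = 20) -> np.ndarray:
    """Hard k-means clustering of scalar weights into 2**bits centroids.

    Each weight can then be stored as a `bits`-bit index into the codebook,
    plus the (tiny) codebook itself.
    """
    flat = w.ravel()
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), 2**bits)
    for _ in range(iters):
        # Assign every weight to its nearest centroid.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(len(centroids)):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids[assign].reshape(w.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)
W_2bit = cluster_weights(W, bits=2)  # every weight replaced by one of 4 values
print("distinct values after clustering:", np.unique(W_2bit).size)
```

At 1 bit the codebook shrinks to two values, which explains the steep accuracy gap between the two rows above: the quantization error grows sharply as the number of centroids drops.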