
Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
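
As a quick illustration of two of the compression families named above, the sketch below applies magnitude-based parameter pruning followed by uniform fake-quantization to a random weight matrix. This is a minimal sketch, not code from any paper listed here; `magnitude_prune` and `uniform_quantize` are hypothetical helper names, not a library API.

```python
# Minimal sketch (illustrative, not from the source) of magnitude-based
# parameter pruning and uniform weight quantization.
import torch


def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask


def uniform_quantize(weight: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Fake-quantize weights to num_bits levels (quantize, then dequantize)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = weight.abs().max() / qmax
    q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
    return q * scale


if __name__ == "__main__":
    w = torch.randn(256, 256)
    w_pruned = magnitude_prune(w, sparsity=0.9)       # zero out 90% of weights
    w_quant = uniform_quantize(w_pruned, num_bits=2)  # 2-bit fake quantization
    print(f"nonzero fraction: {(w_pruned != 0).float().mean().item():.3f}")
    print(f"unique levels:    {w_quant.unique().numel()}")
```

In practice the pruned and quantized weights would replace the originals in a trained model, usually followed by fine-tuning to recover accuracy; the sketch only shows the weight transformation itself.
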

Papers

Showing 741–750 of 1356 papers

| Title | Status | Hype |
|---|---|---|
| Simplifying Two-Stage Detectors for On-Device Inference in Remote Sensing |  | 0 |
| Masked Training of Neural Networks with Partial Gradients |  | 0 |
| Small, Accurate, and Fast Vehicle Re-ID on the Edge: the SAFR Approach |  | 0 |
| Small Language Models: Architectures, Techniques, Evaluation, Problems and Future Adaptation |  | 0 |
| Small Object Detection Based on Modified FSSD and Model Compression |  | 0 |
| Smart Environmental Monitoring of Marine Pollution using Edge AI |  | 0 |
| SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation |  | 0 |
| Smooth Model Compression without Fine-Tuning |  | 0 |
| CrAFT: Compression-Aware Fine-Tuning for Efficient Visual Task Adaptation |  | 0 |
| Soft Labeling Affects Out-of-Distribution Detection of Deep Neural Networks |  | 0 |
Page 75 of 136

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified |