Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
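For intuition, here is a minimal numpy sketch of two of the techniques named above: unstructured magnitude-based parameter pruning and symmetric uniform weight quantization. The helper names and parameters are illustrative assumptions, not drawn from any paper listed on this page.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def uniform_quantize(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Symmetric uniform quantization to num_bits, dequantized back to float."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = max(np.abs(weights).max(), 1e-12) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale  # float approximation of the original weights

w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)  # ~90% of entries set to zero
w_quant = uniform_quantize(w, num_bits=8)    # ~4x smaller than float32 storage
```

In practice both steps are usually followed by fine-tuning to recover accuracy, and the pruned/quantized tensors are stored in sparse or integer formats to realize the size savings.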

Papers

Showing 311-320 of 1356 papers

Title | Status | Hype
Dependency-Aware Semi-Structured Sparsity of GLU Variants in Large Language Models | - | 0
Torch2Chip: An End-to-end Customizable Deep Neural Network Compression and Deployment Toolkit for Prototype Hardware Accelerator Design | Code | 2
FedGreen: Carbon-aware Federated Learning with Model Size Adaptation | - | 0
Rapid Deployment of DNNs for Edge Computing via Structured Pruning at Initialization | - | 0
Data-free Knowledge Distillation for Fine-grained Visual Categorization | Code | 0
Understanding the Performance Horizon of the Latest ML Workloads with NonGEMM Workloads | - | 0
Comprehensive Survey of Model Compression and Speed up for Vision Transformers | - | 0
Structured Model Pruning for Efficient Inference in Computational Pathology | - | 0
Simplifying Two-Stage Detectors for On-Device Inference in Remote Sensing | - | 0
Transferable and Principled Efficiency for Open-Vocabulary Segmentation | Code | 1
Page 32 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified
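For context on the entries above: DKM (differentiable k-means) compresses weights by clustering them, so with b bits per weight each scalar maps to one of 2^b shared centroids ("2bit-1dim" clusters individual scalars into 4 values). The sketch below is a plain hard k-means simplification for intuition only, not the differentiable algorithm from the DKM paper; all names are hypothetical.

```python
import numpy as np

def kmeans_weight_cluster(weights: np.ndarray, num_bits: int = 2,
                          iters: int = 20) -> np.ndarray:
    """Replace each weight with one of 2**num_bits shared centroids (1-dim hard k-means)."""
    flat = weights.ravel().astype(np.float64)
    k = 2 ** num_bits  # 2-bit -> 4 centroids, 1-bit -> 2 centroids
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # assign every weight to its nearest centroid, then recompute centroids
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            members = flat[assign == c]
            if members.size:
                centroids[c] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape)

w = np.random.randn(128, 128)
w_2bit = kmeans_weight_cluster(w, num_bits=2)  # only 4 distinct values remain
assert np.unique(w_2bit).size <= 4
```

Only the per-weight cluster indices (2 bits each) plus the small centroid table need to be stored, which is where the compression comes from; the accuracy gap between the 2-bit and 1-bit rows above reflects how aggressive that clustering is.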