
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
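
To make the three method families named above concrete, here is a minimal NumPy sketch of magnitude-based parameter pruning, symmetric uniform weight quantization, and truncated-SVD low-rank factorization. The function names, sparsity level, bit width, and rank are illustrative choices, not drawn from any paper listed below.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def uniform_quantize(w: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Symmetric uniform quantization: round to a num_bits-wide integer grid,
    then dequantize ("fake quantization") to measure the induced error."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def low_rank_factorize(w: np.ndarray, rank: int):
    """Truncated SVD: approximate W with two thin factors, W ~ A @ B."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.9)  # keep the largest 10% of weights
w_quant = uniform_quantize(w, num_bits=4)    # 4-bit weights
a, b = low_rank_factorize(w, rank=32)        # 4x fewer parameters than dense W

print(f"pruned fraction: {np.mean(w_pruned == 0):.2f}")
print(f"4-bit quantization MAE: {np.abs(w - w_quant).mean():.4f}")
print(f"rank-32 reconstruction MAE: {np.abs(w - a @ b).mean():.4f}")
```

In practice these techniques are applied to trained layer weights (often followed by fine-tuning), and they compose: a pruned network can also be quantized, for example.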

Papers

Showing 591–600 of 1356 papers

Title | Status | Hype
Design and Prototyping Distributed CNN Inference Acceleration in Edge Computing | – | 0
Sparse Probabilistic Circuits via Pruning and Growing | Code | 1
Learning Low-Rank Representations for Model Compression | – | 0
Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders | Code | 0
Is Smaller Always Faster? Tradeoffs in Compressing Self-Supervised Speech Transformers | Code | 0
Edge-MultiAI: Multi-Tenancy of Latency-Sensitive Deep Learning Applications on Edge | – | 0
Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study | Code | 0
XAI-BayesHAR: A novel Framework for Human Activity Recognition with Integrated Uncertainty and Shapely Values | – | 0
Model Compression for DNN-based Speaker Verification Using Weight Quantization | – | 0
GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers | Code | 7
Page 60 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified
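
For context on the table above: DKM refers to differentiable k-means clustering of weights, and "2bit-1dim" means scalar (1-dimensional) weights are clustered into 2^2 = 4 shared centroids, so each weight is stored as a 2-bit codebook index. The sketch below uses plain hard-assignment k-means rather than the differentiable attention-based relaxation that DKM actually trains with; all names and settings are illustrative.

```python
import numpy as np

def kmeans_cluster_weights(w: np.ndarray, num_bits: int = 2, iters: int = 20):
    """Cluster scalar weights into 2**num_bits shared centroids (hard k-means).

    Returns per-weight centroid indices (the compressed representation)
    plus the codebook needed to reconstruct the layer.
    """
    flat = w.reshape(-1)
    k = 2 ** num_bits
    centroids = np.linspace(flat.min(), flat.max(), k)  # spread initial centroids
    for _ in range(iters):
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(k):
            members = flat[assign == c]
            if members.size:  # skip empty clusters
                centroids[c] = members.mean()
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return assign.reshape(w.shape), centroids

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64))
codes, codebook = kmeans_cluster_weights(w, num_bits=2)  # 4 centroids, 2 bits/weight
w_hat = codebook[codes]  # reconstructed (decompressed) weights
print(f"codebook: {np.round(codebook, 3)}")
print(f"reconstruction MSE: {np.mean((w - w_hat) ** 2):.6f}")
```

At 2 bits per weight plus a 4-entry codebook per layer, this representation is roughly 16x smaller than 32-bit floats, which is why the accuracy gap between the 2-bit and 1-bit rows above is the interesting quantity.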