Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
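The description above names pruning and quantization as representative compression techniques. Below is a minimal, self-contained NumPy sketch of two of them: magnitude pruning (zero out the smallest weights) and uniform weight quantization (round weights to a few shared levels). It illustrates the general idea only; the function names and the exact scheme are illustrative and not any listed paper's method.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes."""
    k = int(weights.size * sparsity)
    pruned = weights.copy()
    if k == 0:
        return pruned
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def uniform_quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Round weights to 2**bits evenly spaced levels, then map back to floats."""
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels
    q = np.round((weights - w_min) / scale)   # integer codes in [0, levels]
    return (q * scale + w_min).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
print(magnitude_prune(w, sparsity=0.5))  # roughly half the entries become exactly 0
print(uniform_quantize(w, bits=2))       # at most 4 distinct values remain
```

In practice, pruning is usually interleaved with fine-tuning to recover accuracy, and quantization is applied per layer or per channel rather than over the whole weight matrix as in this sketch.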

Papers

Showing 791–800 of 1356 papers

| Title | Status | Hype |
|---|---|---|
| TinyM^2Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks for Sustainable Edge Deployment | | 0 |
| Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators | | 0 |
| Magic for the Age of Quantized DNNs | | 0 |
| Making deep neural networks work for medical audio: representation, compression and domain adaptation | | 0 |
| Mamba-PTQ: Outlier Channels in Recurrent Large Language Models | | 0 |
| TinyR1-32B-Preview: Boosting Accuracy with Branch-Merge Distillation | | 0 |
| MARS: Multi-macro Architecture SRAM CIM-Based Accelerator with Co-designed Compressed Neural Networks | | 0 |
| An Improving Framework of regularization for Network Compression | | 0 |
| A New Clustering-Based Technique for the Acceleration of Deep Convolutional Networks | | 0 |
| MaskPrune: Mask-based LLM Pruning for Layer-wise Uniform Structures | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |
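For context on the DKM entries above: DKM compresses a network by clustering its weights with a differentiable k-means layer, so that "2bit-1dim" corresponds to mapping each scalar (1-dimensional) weight onto one of 2^2 = 4 shared centroid values. The sketch below shows the underlying hard k-means weight clustering in NumPy, assuming scalar weights; the differentiable soft-assignment training that DKM actually uses is omitted, and the function name is illustrative.

```python
import numpy as np

def kmeans_cluster_weights(weights: np.ndarray, bits: int, iters: int = 20) -> np.ndarray:
    """Replace each scalar weight with the nearest of 2**bits learned centroids."""
    flat = weights.ravel()
    k = 2 ** bits
    # Initialize centroids evenly across the observed weight range.
    centroids = np.linspace(float(flat.min()), float(flat.max()), k)
    for _ in range(iters):
        # Hard assignment: each weight goes to its nearest centroid.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Update step: each centroid becomes the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8)).astype(np.float32)
w_2bit = kmeans_cluster_weights(w, bits=2)
assert len(np.unique(w_2bit)) <= 4  # only the 4 centroid values survive
```

After clustering, only the centroid table and the per-weight cluster indices need to be stored, which is where the compression comes from: 2 bits per weight plus a handful of float centroids, instead of 32 bits per weight.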