
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
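To make the three approaches named above concrete, here is a minimal NumPy sketch applying magnitude-based pruning, SVD-based low-rank factorization, and uniform 8-bit quantization to a single weight matrix. It is illustrative only: the matrix shape, sparsity level, rank, and bit width are arbitrary assumptions, not values taken from any paper listed here.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))  # stand-in for one layer's dense weights

# Parameter pruning: zero out the smallest-magnitude weights.
sparsity = 0.9  # assumed target: keep the top 10% of weights
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Low-rank factorization: approximate W by two thin matrices A @ B,
# storing rank*(m+n) parameters instead of m*n.
rank = 32  # assumed rank
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]  # (256, 32), columns scaled by singular values
B = Vt[:rank, :]            # (32, 512)

# Weight quantization: symmetric uniform 8-bit quantization.
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)  # stored at 1 byte per weight
W_deq = W_q.astype(np.float32) * scale     # dequantized for computation

print("pruned fraction kept:", np.count_nonzero(W_pruned) / W.size)
print("low-rank rel. error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
print("quantization rel. error:", np.linalg.norm(W - W_deq) / np.linalg.norm(W))

Each technique trades a different kind of redundancy for size: pruning exploits weight sparsity, factorization exploits low effective rank, and quantization exploits the limited precision actually needed at inference time.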

Papers

Showing 811–820 of 1356 papers

Title | Status | Hype
Memory-Friendly Scalable Super-Resolution via Rewinding Lottery Ticket Hypothesis | | 0
An Empirical Study of Low Precision Quantization for TinyML | | 0
Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains | | 0
An Empirical Investigation of Matrix Factorization Methods for Pre-trained Transformers | | 0
MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression | | 0
To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference | | 0
A Multi-objective Complex Network Pruning Framework Based on Divide-and-conquer and Global Performance Impairment Ranking | | 0
MIMONet: Multi-Input Multi-Output On-Device Deep Learning | | 0
MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks | | 0
Minimally Invasive Surgery for Sparse Neural Networks in Contrastive Manner | | 0
Page 82 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified