
Model Compression

Model compression has been an actively pursued research area over the past few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
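As a rough illustration of two of the techniques named above, the sketch below applies unstructured magnitude pruning followed by symmetric uniform quantization to a random weight matrix. It is a minimal NumPy-only sketch; the function names, the 90% sparsity target, and the 8-bit setting are illustrative assumptions, not code from any of the listed papers.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Unstructured magnitude pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def uniform_quantize(weights: np.ndarray, num_bits: int = 8):
    """Symmetric uniform quantization: map floats to num_bits-wide integers."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale  # dequantize with q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.9)        # ~90% of weights set to zero
q, scale = uniform_quantize(w_pruned, num_bits=8)  # 4x smaller than float32
print(f"sparsity: {np.mean(w_pruned == 0):.2f}, "
      f"max dequant error: {np.max(np.abs(q * scale - w_pruned)):.4f}")
```

In practice the two steps compose: the pruned tensor can be stored in a sparse format, and the surviving weights are stored as integers plus a single scale factor per tensor.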

Papers

Showing 161–170 of 1356 papers

Title | Status | Hype
Individual Content and Motion Dynamics Preserved Pruning for Video Diffusion Models | | 0
Faithful Label-free Knowledge Distillation | Code | 0
Efficient Pruning of Text-to-Image Models: Insights from Pruning Stable Diffusion | | 0
TaQ-DiT: Time-aware Quantization for Diffusion Transformers | | 0
FASTNav: Fine-tuned Adaptive Small-language-models Trained for Multi-point Robot Navigation | | 0
What Makes a Good Dataset for Knowledge Distillation? | | 0
Puppet-CNN: Input-Adaptive Convolutional Neural Networks with Model Compression using Ordinary Differential Equation | | 0
Bridging the Resource Gap: Deploying Advanced Imitation Learning Models onto Affordable Embedded Platforms | | 0
An exploration of the effect of quantisation on energy consumption and inference time of StarCoder2 | Code | 0
Re-Parameterization of Lightweight Transformer for On-Device Speech Emotion Recognition | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
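The DKM entries refer to differentiable k-means weight clustering, where a "2bit-1dim" configuration palettizes each scalar (1-dimensional) weight to one of 2^2 = 4 shared centroids. The sketch below shows the ordinary hard-assignment k-means version of that idea; DKM itself makes the assignment step differentiable so it can be trained end to end, which is not reproduced here, and the function name is a made-up placeholder.

```python
import numpy as np

def kmeans_palettize(weights: np.ndarray, num_bits: int, iters: int = 25):
    """Hard k-means weight clustering: each weight is replaced by one of
    2**num_bits shared centroids, so only the small centroid table plus a
    num_bits-wide index per weight needs to be stored."""
    flat = weights.ravel()
    k = 2 ** num_bits
    # Initialize centroids spread evenly over the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assignment step: nearest centroid per weight.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Update step: move each centroid to the mean of its members.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape), centroids

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128)).astype(np.float32)
w_2bit, table = kmeans_palettize(w, num_bits=2)  # 4 shared values: "2bit-1dim"
print(table, float(np.mean((w - w_2bit) ** 2)))
```

The gap between the 2-bit and 1-bit rows above reflects the usual trade-off: halving the bit width halves the centroid budget (4 values down to 2), which sharply increases clustering error and, here, the drop in claimed accuracy.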