SOTAVerified

Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
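Two of the techniques named above, magnitude pruning and low-rank factorization, can be sketched on a toy weight matrix. This is a minimal illustration with NumPy under assumed settings (90% sparsity, rank 32), not the method of any paper listed below:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # toy weight matrix

# 1) Magnitude pruning: zero out the 90% of weights with the smallest |value|.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
sparsity = 1.0 - np.count_nonzero(W_pruned) / W.size  # roughly 0.9

# 2) Low-rank factorization: keep the top-k singular values, so W ~ U_k S_k V_k.
k = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :k] * S[:k]) @ Vt[:k]
# Storage drops from 256*256 weights to 256*k + k + k*256 parameters.
```

In practice the pruned or factorized layers are fine-tuned afterwards to recover accuracy.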

Papers

Showing 876–900 of 1356 papers

| Title | Status | Hype |
|---|---|---|
| Clustered Sampling: Low-Variance and Improved Representativity for Clients Selection in Federated Learning | Code | 1 |
| 3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration | | 0 |
| Test-Time Adaptation Toward Personalized Speech Enhancement: Zero-Shot Learning with Knowledge Distillation | | 0 |
| Neural 3D Scene Compression via Model Compression | | 0 |
| Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression | | 0 |
| Modulating Regularization Frequency for Efficient Compression-Aware Model Training | | 0 |
| Initialization and Regularization of Factorized Neural Layers | Code | 1 |
| Knowledge Distillation for Swedish NER models: A Search for Performance and Efficiency | | 0 |
| On the Adversarial Robustness of Quantized Neural Networks | | 0 |
| Stealthy Backdoors as Compression Artifacts | Code | 0 |
| Spirit Distillation: A Model Compression Method with Multi-domain Knowledge Transfer | | 0 |
| Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks | | 0 |
| Skip-Convolutions for Efficient Video Processing | Code | 1 |
| Knowledge Distillation as Semiparametric Inference | Code | 0 |
| Differentiable Model Compression via Pseudo Quantization Noise | Code | 1 |
| Compact CNN Structure Learning by Knowledge Distillation | | 0 |
| Augmenting Deep Classifiers with Polynomial Neural Networks | Code | 0 |
| Annealing Knowledge Distillation | Code | 0 |
| Dual Discriminator Adversarial Distillation for Data-free Model Compression | | 0 |
| Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication | | 0 |
| Efficient Personalized Speech Enhancement through Self-Supervised Learning | | 0 |
| Model Compression for Dynamic Forecast Combination | Code | 0 |
| Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation | | 0 |
| Deep Compression for PyTorch Model Deployment on Microcontrollers | Code | 1 |
| Shrinking Bigfoot: Reducing wav2vec 2.0 footprint | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |
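The "2bit-1dim" entries above refer to clustering scalar weights into 2^2 = 4 shared values. A hedged sketch of that idea using plain k-means on a toy matrix (a simplified, non-differentiable stand-in; DKM itself makes the cluster assignment differentiable, and the function name `cluster_weights` is illustrative only):

```python
import numpy as np

def cluster_weights(W, bits=2, iters=20, seed=0):
    """Quantize scalar ("1-dim") weights to 2**bits shared values via k-means."""
    k = 2 ** bits                      # 4 centroids for 2-bit
    w = W.ravel()
    rng = np.random.default_rng(seed)
    centroids = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = w[idx == j].mean()
    return centroids[idx].reshape(W.shape), idx.reshape(W.shape)

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64)).astype(np.float32)
W_q, codes = cluster_weights(W)
# W_q contains at most 4 distinct values; codes fit in 2 bits per weight.
```

Storing 2-bit codes plus a 4-entry codebook in place of 32-bit floats is what makes the resulting model so much smaller, at the cost of the accuracy gap visible in the table.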