SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
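
As a concrete illustration of two of the methods named in the description above, the sketch below applies magnitude-based parameter pruning and SVD-based low-rank factorization to a single dense weight matrix with NumPy. The shapes, sparsity level, and rank are illustrative assumptions, not values taken from any paper listed on this page.

```python
# Minimal sketch (illustrative only): magnitude pruning and low-rank
# factorization of one dense layer's weight matrix. Shapes, sparsity
# level, and rank are assumptions chosen for demonstration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256)).astype(np.float32)  # dense layer weights

# 1) Magnitude-based parameter pruning: zero out the smallest-magnitude weights.
sparsity = 0.8                                   # keep only the largest 20% of weights
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2) Low-rank factorization: approximate W with two thin factors via truncated SVD.
rank = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]                       # (512, 32)
B = Vt[:rank, :]                                 # (32, 256)
W_lowrank = A @ B                                # stored as A and B: 32*(512+256) params vs 512*256

print(f"pruned nonzeros: {np.count_nonzero(W_pruned)} / {W.size}")
print(f"low-rank reconstruction error: {np.linalg.norm(W - W_lowrank) / np.linalg.norm(W):.3f}")
```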

Papers

Showing 401–425 of 1356 papers

Title | Status | Hype
GASL: Guided Attention for Sparsity Learning in Deep Neural Networks | Code | 0
RemoteTrimmer: Adaptive Structural Pruning for Remote Sensing Image Classification | Code | 0
Finding Deviated Behaviors of the Compressed DNN Models for Image Classifications | Code | 0
Adversarial Robustness vs. Model Compression, or Both? | Code | 0
Lottery Aware Sparsity Hunting: Enabling Federated Learning on Resource-Limited Edge | Code | 0
Robust and Large-Payload DNN Watermarking via Fixed, Distribution-Optimized, Weights | Code | 0
Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks | Code | 0
Exploring Gradient Flow Based Saliency for DNN Model Compression | Code | 0
Faithful Label-free Knowledge Distillation | Code | 0
Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment | Code | 0
Explicit-NeRF-QA: A Quality Assessment Database for Explicit NeRF Model Compression | Code | 0
Model Compression with Adversarial Robustness: A Unified Optimization Framework | Code | 0
Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression | Code | 0
What Do Compressed Deep Neural Networks Forget? | Code | 0
RanDeS: Randomized Delta Superposition for Multi-Model Compression | Code | 0
Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective | Code | 0
Enhancing Knowledge Distillation of Large Language Models through Efficient Multi-Modal Distribution Alignment | Code | 0
Compressing Vision Transformers for Low-Resource Visual Learning | Code | 0
Simple is what you need for efficient and accurate medical image segmentation | Code | 0
Is Smaller Always Faster? Tradeoffs in Compressing Self-Supervised Speech Transformers | Code | 0
Empirical Evaluation of Deep Learning Model Compression Techniques on the WaveNet Vocoder | Code | 0
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency | Code | 0
Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations | Code | 0
Efficient Speech Translation through Model Compression and Knowledge Distillation | Code | 0
Page 17 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified
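
The two entries above report MobileBERT with weights clustered to 2-bit and 1-bit codes using DKM. As context for what "2bit-1dim" weight sharing means, the sketch below runs plain hard-assignment k-means over a weight tensor so that it takes at most 2**bits distinct values; "1dim" presumably refers to clustering scalar weights rather than multi-dimensional sub-vectors. This is not DKM itself, which performs the clustering differentiably during training; all names, shapes, and iteration counts here are illustrative assumptions.

```python
# Minimal sketch of plain k-means weight clustering (not the DKM procedure):
# quantize a weight tensor to 2**bits shared centroid values.
import numpy as np

def cluster_weights(w: np.ndarray, bits: int = 2, iters: int = 20) -> np.ndarray:
    """Quantize a weight tensor to 2**bits shared values via Lloyd's k-means."""
    flat = w.reshape(-1)
    k = 2 ** bits
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recompute centroids.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            members = flat[assign == c]
            if members.size:
                centroids[c] = members.mean()
    return centroids[assign].reshape(w.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128)).astype(np.float32)
W_q = cluster_weights(W, bits=2)
print("unique values after clustering:", np.unique(W_q).size)  # at most 4
```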