SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
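As a rough illustration of the three methods named in the description above, the following NumPy sketch applies magnitude pruning, rank-r SVD factorization, and uniform 8-bit quantization to a single randomly initialized weight matrix. The matrix shape, sparsity level, rank, and bit width are arbitrary choices for the example and are not taken from any of the papers listed below.

```python
# Minimal sketch of three model-compression methods on one hypothetical
# dense-layer weight matrix. Illustrative only; all parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)  # hypothetical weights

# 1. Parameter pruning: zero out the 90% of weights with smallest magnitude.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
sparsity = 1.0 - np.count_nonzero(W_pruned) / W.size  # ~0.90

# 2. Low-rank factorization: approximate W by a rank-r product U @ V.
r = 32
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]   # (256, r), singular values folded into U
V = Vt[:r, :]               # (r, 512)
params_saved = W.size - (U.size + V.size)

# 3. Weight quantization: store int8 codes plus one float scale factor.
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_dequant = W_q.astype(np.float32) * scale  # reconstruction used at inference

print(f"pruned sparsity:        {sparsity:.2f}")
print(f"low-rank params saved:  {params_saved}")
print(f"quantization MSE:       {np.mean((W - W_dequant) ** 2):.6f}")
```

In practice these methods are applied per layer (often followed by fine-tuning to recover accuracy), and many of the papers below combine them with knowledge distillation or hardware-aware variants.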

Papers

Showing 401–450 of 1356 papers

Title | Status | Hype
Asymmetric Masked Distillation for Pre-Training Small Foundation Models | Code | 0
GSB: Group Superposition Binarization for Vision Transformer with Limited Training Samples | Code | 0
Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration | Code | 0
Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks | Code | 0
Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures | Code | 0
Reinforced Knowledge Distillation for Time Series Regression | Code | 0
Image Classification with CondenseNeXt for ARM-Based Computing Platforms | Code | 0
Uncovering the Hidden Cost of Model Compression | Code | 0
From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression | Code | 0
A flexible, extensible software framework for model compression based on the LC algorithm | Code | 0
Foundations of Large Language Model Compression -- Part 1: Weight Quantization | Code | 0
GASL: Guided Attention for Sparsity Learning in Deep Neural Networks | Code | 0
Computer Vision Model Compression Techniques for Embedded Systems: A Survey | Code | 0
Robust Model Compression Using Deep Hypotheses | Code | 0
FLAT-LLM: Fine-grained Low-rank Activation Space Transformation for Large Language Model Compression | Code | 0
FLoCoRA: Federated learning compression with low-rank adaptation | Code | 0
FedSynth: Gradient Compression via Synthetic Data in Federated Learning | Code | 0
Lottery Aware Sparsity Hunting: Enabling Federated Learning on Resource-Limited Edge | Code | 0
Adversarial Robustness vs Model Compression, or Both? | Code | 0
What Do Compressed Deep Neural Networks Forget? | Code | 0
CoDiNet: Path Distribution Modeling with Consistency and Diversity for Dynamic Routing | Code | 0
Adversarial Robustness vs. Model Compression, or Both? | Code | 0
Few Shot Network Compression via Cross Distillation | Code | 0
Fast DistilBERT on CPUs | Code | 0
Finding Deviated Behaviors of the Compressed DNN Models for Image Classifications | Code | 0
Faithful Label-free Knowledge Distillation | Code | 0
CASP: Compression of Large Multimodal Models Based on Attention Sparsity | Code | 0
An exploration of the effect of quantisation on energy consumption and inference time of StarCoder2 | Code | 0
Causal Explanation of Convolutional Neural Networks | Code | 0
Model Compression with Adversarial Robustness: A Unified Optimization Framework | Code | 0
Compression-aware Continual Learning using Singular Value Decomposition | Code | 0
Explicit-NeRF-QA: A Quality Assessment Database for Explicit NeRF Model Compression | Code | 0
Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations | Code | 0
Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression | Code | 0
Compressing Vision Transformers for Low-Resource Visual Learning | Code | 0
Is Smaller Always Faster? Tradeoffs in Compressing Self-Supervised Speech Transformers | Code | 0
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment | Code | 0
StrassenNets: Deep Learning with a Multiplication Budget | Code | 0
Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective | Code | 0
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency | Code | 0
Enhancing Knowledge Distillation of Large Language Models through Efficient Multi-Modal Distribution Alignment | Code | 0
Focused Quantization for Sparse CNNs | Code | 0
Exploring Gradient Flow Based Saliency for DNN Model Compression | Code | 0
Empirical Evaluation of Deep Learning Model Compression Techniques on the WaveNet Vocoder | Code | 0
ELSA: Exploiting Layer-wise N:M Sparsity for Vision Transformer Acceleration | Code | 0
SymbolNet: Neural Symbolic Regression with Adaptive Dynamic Pruning for Compression | Code | 0
Einconv: Exploring Unexplored Tensor Network Decompositions for Convolutional Neural Networks | Code | 0
Improved Knowledge Distillation via Full Kernel Matrix Transfer | Code | 0
Compressing Convolutional Neural Networks via Factorized Convolutional Filters | Code | 0
Efficient model compression with Random Operation Access Specific Tile (ROAST) hashing | Code | 0
Page 9 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified