
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
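To make the three techniques named above concrete, here is a minimal NumPy sketch of magnitude-based parameter pruning, low-rank factorization via truncated SVD, and uniform weight quantization applied to a single weight matrix. The function names, shapes, and hyperparameters are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Approximate w (m x n) as a product of (m x r) and (r x n) factors via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

def quantize_uniform(w: np.ndarray, bits: int = 8):
    """Uniformly quantize weights to 2**bits levels; returns integer codes plus scale and offset."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2**bits - 1)
    codes = np.round((w - lo) / scale).astype(np.uint8)
    return codes, scale, lo

# Example: compress a random 256x256 layer three ways.
w = np.random.randn(256, 256).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)        # 90% of entries zeroed
a, b = low_rank_factorize(w, rank=32)              # 256*32*2 params instead of 256*256
codes, scale, zero = quantize_uniform(w, bits=8)   # 8-bit codes; dequantize as codes*scale + zero
```

In practice these techniques are applied layer by layer and usually followed by fine-tuning to recover accuracy; the sketch only shows the compression step itself.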

Papers

Showing 301–350 of 1356 papers

Title | Status | Hype
Knowledge Distillation with Reptile Meta-Learning for Pretrained Language Model Compression | Code | 0
AutoMC: Automated Model Compression based on Domain Knowledge and Progressive search strategy | Code | 0
Knowledge Grafting of Large Language Models | Code | 0
Knowledge Distillation for End-to-End Person Search | Code | 0
Knowledge Distillation for Singing Voice Detection | Code | 0
Knowledge Distillation as Semiparametric Inference | Code | 0
Learning Intrinsic Sparse Structures within Long Short-Term Memory | Code | 0
Iterative Filter Pruning for Concatenation-based CNN Architectures | Code | 0
Information-Theoretic Understanding of Population Risk Improvement with Model Compression | Code | 0
Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation | Code | 0
A Corrected Expected Improvement Acquisition Function Under Noisy Observations | Code | 0
InDistill: Information flow-preserving knowledge distillation for model compression | Code | 0
I3D: Transformer architectures with input-dependent dynamic depth for speech recognition | Code | 0
MiniDisc: Minimal Distillation Schedule for Language Model Compression | Code | 0
Image Classification with CondenseNeXt for ARM-Based Computing Platforms | Code | 0
Data-Free Backbone Fine-Tuning for Pruned Neural Networks | Code | 0
Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory | Code | 0
Data-Free Adversarial Distillation | Code | 0
HTR-JAND: Handwritten Text Recognition with Joint Attention Network and Knowledge Distillation | Code | 0
ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs | Code | 0
Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning | Code | 0
GSB: Group Superposition Binarization for Vision Transformer with Limited Training Samples | Code | 0
A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning | Code | 0
Data-free Knowledge Distillation for Fine-grained Visual Categorization | Code | 0
Data-Free Knowledge Distillation for Image Super-Resolution | Code | 0
Comb, Prune, Distill: Towards Unified Pruning for Vision Model Compression | Code | 0
High-fidelity 3D Model Compression based on Key Spheres | Code | 0
Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks | Code | 0
Group channel pruning and spatial attention distilling for object detection | Code | 0
A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs | Code | 0
Cross-lingual Distillation for Text Classification | Code | 0
Attribution-guided Pruning for Compression, Circuit Discovery, and Targeted Correction in LLMs | Code | 0
Foundations of Large Language Model Compression -- Part 1: Weight Quantization | Code | 0
JavaScript Convolutional Neural Networks for Keyword Spotting in the Browser: An Experimental Analysis | Code | 0
A Computing Kernel for Network Binarization on PyTorch | Code | 0
From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression | Code | 0
FLoCoRA: Federated learning compression with low-rank adaptation | Code | 0
DeepCompress-ViT: Rethinking Model Compression to Enhance Efficiency of Vision Transformers at the Edge | Code | 0
Attacking Compressed Vision Transformers | Code | 0
DeepFont: Identify Your Font from An Image | Code | 0
A Brief Review of Hypernetworks in Deep Learning | Code | 0
Deep Model Compression Also Helps Models Capture Ambiguity | Code | 0
GASL: Guided Attention for Sparsity Learning in Deep Neural Networks | Code | 0
Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures | Code | 0
How does topology of neural architectures impact gradient propagation and model performance? | Code | 0
CA-LoRA: Adapting Existing LoRA for Compressed LLMs to Enable Efficient Multi-Tasking on Personal Devices | Code | 0
Deep Neural Network Compression for Image Classification and Object Detection | Code | 0
Learning Accurate Performance Predictors for Ultrafast Automated Model Compression | Code | 0
“Learning-Compression” Algorithms for Neural Net Pruning | Code | 0
Few Shot Network Compression via Cross Distillation | Code | 0
Page 7 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified