
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
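
As a rough illustration of the three technique families named above, the sketch below applies magnitude-based pruning, symmetric uniform quantization, and truncated-SVD low-rank factorization to a random weight matrix with NumPy. It is a minimal sketch of the general ideas, not the method of any particular paper listed below, and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

# 1) Magnitude pruning: zero out the smallest-magnitude 90% of weights.
def magnitude_prune(w, sparsity=0.9):
    k = int(sparsity * w.size)
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

# 2) Symmetric uniform quantization to 8-bit integers.
def uniform_quantize(w, bits=8):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale  # recover approximate weights as q * scale

# 3) Low-rank factorization: keep only the top-r singular components,
#    storing two thin factors instead of the full matrix.
def low_rank(w, r=32):
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :r] * s[:r], vt[:r]

w_pruned = magnitude_prune(w)
q, scale = uniform_quantize(w)
a, b = low_rank(w)
print(np.count_nonzero(w_pruned) / w.size,  # fraction of weights kept (~0.10)
      np.abs(q * scale - w).max(),          # max quantization error
      a.shape, b.shape)                     # (256, 32), (32, 256); a @ b ~ w
```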

Papers

Showing 1101–1150 of 1356 papers

Title | Status | Hype
Pangu Light: Weight Re-Initialization for Pruning and Accelerating LLMs | | 0
Parameter Compression of Recurrent Neural Networks and Degradation of Short-term Memory | | 0
Partitioning-Guided K-Means: Extreme Empty Cluster Resolution for Extreme Model Compression | | 0
PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning | | 0
PCEE-BERT: Accelerating BERT Inference via Patient and Confident Early Exiting | | 0
PC-LoRA: Low-Rank Adaptation for Progressive Model Compression with Knowledge Distillation | | 0
PCNN: Pattern-based Fine-Grained Regular Pruning towards Optimizing CNN Accelerators | | 0
PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices | | 0
Pea-KD: Parameter-efficient and Accurate Knowledge Distillation on BERT | | 0
Pea-KD: Parameter-efficient and accurate Knowledge Distillation | | 0
Penrose Tiled Low-Rank Compression and Section-Wise Q&A Fine-Tuning: A General Framework for Domain-Specific Large Language Model Adaptation | | 0
Performance Aware Convolutional Neural Network Channel Pruning for Embedded GPUs | | 0
PERMDNN: Efficient Compressed DNN Architecture with Permuted Diagonal Matrices | | 0
Perturbation of Deep Autoencoder Weights for Model Compression and Classification of Tabular Data | | 0
PFGDF: Pruning Filter via Gaussian Distribution Feature for Deep Neural Networks Acceleration | | 0
Pivoting Factorization: A Compact Meta Low-Rank Representation of Sparsity for Efficient Inference in Large Language Models | | 0
Position-Aware Depth Decay Decoding (D^3): Boosting Large Language Model Inference Efficiency | | 0
Post-Training Quantization for Video Matting | | 0
Shedding the Bits: Pushing the Boundaries of Quantization with Minifloats on FPGAs | | 0
Post-Training Weighted Quantization of Neural Networks for Language Models | | 0
PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation | | 0
Practical quantum federated learning and its experimental demonstration | | 0
Precise Box Score: Extract More Information from Datasets to Improve the Performance of Face Detection | | 0
Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data | | 0
Preview-based Category Contrastive Learning for Knowledge Distillation | | 0
InDistill: Information flow-preserving knowledge distillation for model compression | Code | 0
CASP: Compression of Large Multimodal Models Based on Attention Sparsity | Code | 0
Compressing Vision Transformers for Low-Resource Visual Learning | Code | 0
Slicing Mutual Information Generalization Bounds for Neural Networks | Code | 0
SlimNets: An Exploration of Deep Model Compression and Acceleration | Code | 0
Information-Theoretic Understanding of Population Risk Improvement with Model Compression | Code | 0
Focused Quantization for Sparse CNNs | Code | 0
Canonical convolutional neural networks | Code | 0
ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs | Code | 0
Model Compression with Adversarial Robustness: A Unified Optimization Framework | Code | 0
Visual Domain Adaptation for Monocular Depth Estimation on Resource-Constrained Hardware | Code | 0
PruMUX: Augmenting Data Multiplexing with Model Compression | Code | 0
A Corrected Expected Improvement Acquisition Function Under Noisy Observations | Code | 0
Comb, Prune, Distill: Towards Unified Pruning for Vision Model Compression | Code | 0
Towards Faster and More Compact Foundation Models for Molecular Property Prediction | Code | 0
Tensorization of neural networks for improved privacy and interpretability | Code | 0
Network Pruning via Performance Maximization | Code | 0
Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation | Code | 0
Tensorized Embedding Layers for Efficient Model Compression | Code | 0
APSQ: Additive Partial Sum Quantization with Algorithm-Hardware Co-Design | Code | 0
Neural Architecture Codesign for Fast Physics Applications | Code | 0
Iterative Filter Pruning for Concatenation-based CNN Architectures | Code | 0
TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP | Code | 0
JavaScript Convolutional Neural Networks for Keyword Spotting in the Browser: An Experimental Analysis | Code | 0
Image Classification with CondenseNeXt for ARM-Based Computing Platforms | Code | 0
Page 23 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
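
The DKM rows above refer to clustering-based weight compression: each scalar weight is replaced by an index into a small shared codebook, so "2bit-1dim" means each weight is coded with 2 bits, i.e. one of 2^2 = 4 centroids. DKM itself learns the cluster assignments differentiably during training; the sketch below is only the hard-assignment k-means analogue of that codebook idea, with illustrative names, not the published method.

```python
import numpy as np

def kmeans_codebook(w: np.ndarray, bits: int, iters: int = 25):
    """Cluster weights into 2**bits shared centroids with 1-D hard k-means."""
    flat = w.ravel()
    k = 2 ** bits
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recenter.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids, assign.reshape(w.shape).astype(np.uint8)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
centroids, codes = kmeans_codebook(w, bits=2)  # 4 centroids -> 2 bits/weight
w_hat = centroids[codes]                       # reconstructed weights
print(f"codebook: {np.round(centroids, 3)}, MSE: {((w - w_hat) ** 2).mean():.4f}")
```

Storage drops from 32 bits per weight to 2 bits plus the tiny codebook, which is the source of the accuracy gap between the 2-bit and 1-bit rows in the table.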