Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
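
To make the three techniques named in the description concrete, here is a minimal, self-contained NumPy sketch, not drawn from any of the papers listed below (the function names and settings are illustrative assumptions): magnitude-based parameter pruning, low-rank factorization via truncated SVD, and symmetric uniform weight quantization, each applied to a single weight matrix.

```python
import numpy as np

def magnitude_prune(W, sparsity=0.9):
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= threshold, W, 0.0)

def low_rank_factorize(W, rank=8):
    """Approximate W (m x n) as A @ B, with A (m x rank) and B (rank x n), via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B                  # stores m*rank + rank*n values instead of m*n

def quantize_uniform(W, bits=8):
    """Symmetric uniform quantization to signed integers, plus the scale needed to dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / qmax
    q = np.clip(np.round(W / scale), -qmax, qmax).astype(np.int8)
    return q, scale

W = np.random.randn(256, 128).astype(np.float32)
W_sparse = magnitude_prune(W)
A, B = low_rank_factorize(W, rank=16)
q, s = quantize_uniform(W)
print("pruned nonzeros:", np.count_nonzero(W_sparse))
print("low-rank rel. error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
print("quant rel. error:", np.linalg.norm(W - q * s) / np.linalg.norm(W))
```

In practice these are applied per layer and usually followed by fine-tuning to recover accuracy; the sketch only shows the compression step itself.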

Papers

Showing 451–500 of 1356 papers

Title | Status | Hype
Model Adaptation for Time Constrained Embodied Control | | 0
An Empirical Investigation of Matrix Factorization Methods for Pre-trained Transformers | | 0
Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead | | 0
Knowledge Distillation in Federated Learning: a Survey on Long Lasting Challenges and New Solutions | | 0
Implicit Neural Representation for Videos Based on Residual Connection | | 0
EncCluster: Scalable Functional Encryption in Federated Learning through Weight Clustering and Probabilistic Filters | | 0
PC-LoRA: Low-Rank Adaptation for Progressive Model Compression with Knowledge Distillation | | 0
MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases | | 0
DistilDoc: Knowledge Distillation for Visually-Rich Document Applications | | 0
On the social bias of speech self-supervised models | | 0
Slicing Mutual Information Generalization Bounds for Neural Networks | Code | 0
Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective | Code | 0
Reweighted Solutions for Weighted Low Rank Approximation | | 0
Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning | | 0
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model | Code | 0
Effective Interplay between Sparsity and Quantization: From Theory to Practice | | 0
LCQ: Low-Rank Codebook based Quantization for Large Language Models | | 0
Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization | | 0
Occam Gradient Descent | Code | 0
subMFL: Compatible subModel Generation for Federated Learning in Device Heterogenous Environment | Code | 0
NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models | | 0
Efficient Model Compression for Hierarchical Federated Learning | | 0
ExtremeMETA: High-speed Lightweight Image Segmentation Model by Remodeling Multi-channel Metamaterial Imagers | | 0
Efficiency optimization of large-scale language models based on deep learning in natural language processing tasks | | 0
TinyM^2Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks for Sustainable Edge Deployment | | 0
Densely Distilling Cumulative Knowledge for Continual Learning | | 0
AdaKD: Dynamic Knowledge Distillation of ASR models using Adaptive Loss Weighting | | 0
Characterizing the Accuracy–Efficiency Trade-off of Low-rank Decomposition in Language Models | | 0
NurtureNet: A Multi-task Video-based Approach for Newborn Anthropometry | | 0
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks | | 0
Light Field Compression Based on Implicit Neural Representation | | 0
Communication-Efficient Federated Learning with Adaptive Compression under Dynamic Bandwidth | | 0
Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer | Code | 0
Iterative Filter Pruning for Concatenation-based CNN Architectures | Code | 0
Dependency-Aware Semi-Structured Sparsity of GLU Variants in Large Language Models | | 0
FedGreen: Carbon-aware Federated Learning with Model Size Adaptation | | 0
Rapid Deployment of DNNs for Edge Computing via Structured Pruning at Initialization | | 0
Data-free Knowledge Distillation for Fine-grained Visual Categorization | Code | 0
Understanding the Performance Horizon of the Latest ML Workloads with NonGEMM Workloads | | 0
Comprehensive Survey of Model Compression and Speed up for Vision Transformers | | 0
Structured Model Pruning for Efficient Inference in Computational Pathology | | 0
Simplifying Two-Stage Detectors for On-Device Inference in Remote Sensing | | 0
Bayesian Federated Model Compression for Communication and Computation Efficiency | | 0
Multilingual Brain Surgeon: Large Language Models Can be Compressed Leaving No Language Behind | Code | 0
Improve Knowledge Distillation via Label Revision and Data Selection | | 0
Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution | | 0
Automated Inference of Graph Transformation Rules | | 0
On Linearizing Structured Data in Encoder-Decoder Language Models: Insights from Text-to-SQL | | 0
Enhancing Inference Efficiency of Large Language Models: Investigating Optimization Strategies and Architectural Innovations | | 0
Instance-Aware Group Quantization for Vision Transformers | | 0
Page 10 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
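
For context on the table above: DKM (Differentiable K-Means) compresses weights by clustering them into a small codebook and storing a low-bit index per weight, so "2bit-1dim" plausibly means each scalar weight is replaced by a 2-bit index into 4 learned centroids. The sketch below shows only the hard k-means core of that idea as an assumption-laden illustration, not the differentiable attention-based assignment of the actual DKM method; all names are illustrative.

```python
import numpy as np

def kmeans_cluster_weights(w, bits=2, iters=20):
    """Cluster scalar weights into 2**bits centroids; return per-weight codes and the codebook.
    Note: plain hard k-means, unlike DKM's differentiable soft assignment."""
    k = 2 ** bits
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(w.min(), w.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        codes = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            if np.any(codes == j):
                centroids[j] = w[codes == j].mean()
    return codes.astype(np.uint8), centroids

w = np.random.randn(4096).astype(np.float32)
codes, codebook = kmeans_cluster_weights(w, bits=2)
w_hat = codebook[codes]  # reconstructed (dequantized) weights
print("codebook:", np.round(codebook, 3), "mse:", np.mean((w - w_hat) ** 2))
```

Storage then drops from 32 bits per weight to 2 bits per weight plus a 4-entry codebook, which is the source of the compression measured in the benchmark rows.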