
Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
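As a quick illustration of the three technique families named above, here is a minimal NumPy sketch. The weight shape, sparsity level, rank, and bit width are arbitrary assumptions chosen for illustration; this is not the method of any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)
# A stand-in dense weight matrix; the (512, 256) shape is an assumption.
W = rng.standard_normal((512, 256)).astype(np.float32)

# 1) Parameter pruning: zero out the smallest-magnitude weights.
def magnitude_prune(w: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

# 2) Low-rank factorization: approximate W with two thin factors via truncated SVD.
def low_rank_factorize(w: np.ndarray, rank: int = 32):
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # shape (512, rank)
    b = vt[:rank, :]             # shape (rank, 256)
    return a, b                  # store a and b instead of w; a @ b approximates w

# 3) Weight quantization: symmetric 8-bit linear quantization.
def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale              # dequantize as q.astype(np.float32) * scale

sparse_w = magnitude_prune(W)
a, b = low_rank_factorize(W)
q, scale = quantize_int8(W)
print(f"pruned nonzeros: {np.count_nonzero(sparse_w)} / {W.size}")
print(f"low-rank params: {a.size + b.size} vs dense {W.size}")
print(f"int8 max reconstruction error: {np.abs(W - q.astype(np.float32) * scale).max():.4f}")
```

Each technique trades accuracy for storage or compute differently: pruning yields sparse matrices, factorization replaces one matrix multiply with two smaller ones, and quantization shrinks every weight to fewer bits.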

Papers

Showing 1–50 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer | Code | 9 |
| GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers | Code | 7 |
| A Survey on Knowledge Distillation of Large Language Models | Code | 5 |
| LLM Inference Unveiled: Survey and Roofline Model Insights | Code | 4 |
| ZipNN: Lossless Compression for AI Models | Code | 3 |
| SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression | Code | 3 |
| ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models | Code | 3 |
| SVD-LLM V2: Optimizing Singular Value Truncation for Large Language Model Compression | Code | 3 |
| Efficient Reasoning Models: A Survey | Code | 3 |
| Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields | Code | 3 |
| Data-Free Knowledge Distillation for Deep Neural Networks | Code | 2 |
| QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning | Code | 2 |
| LiDAR-PTQ: Post-Training Quantization for Point Cloud 3D Object Detection | Code | 2 |
| Learning Student Networks in the Wild | Code | 2 |
| Compressing Volumetric Radiance Fields to 1 MB | Code | 2 |
| Torch2Chip: An End-to-end Customizable Deep Neural Network Compression and Deployment Toolkit for Prototype Hardware Accelerator Design | Code | 2 |
| On-Device Domain Generalization | Code | 2 |
| PromptMM: Multi-Modal Knowledge Distillation for Recommendation with Prompt-Tuning | Code | 2 |
| MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models | Code | 2 |
| Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers | Code | 2 |
| Compact 3D Gaussian Representation for Radiance Field | Code | 2 |
| Well-Read Students Learn Better: On the Importance of Pre-training Compact Models | Code | 2 |
| LightGNN: Simple Graph Neural Network for Recommendation | Code | 2 |
| MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression | Code | 2 |
| Fast convolutional neural networks on FPGAs with hls4ml | Code | 2 |
| Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey | Code | 2 |
| Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks | Code | 2 |
| AMC: AutoML for Model Compression and Acceleration on Mobile Devices | Code | 2 |
| OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models | Code | 2 |
| Towards Lightweight Super-Resolution with Dual Regression Learning | Code | 2 |
| A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness | Code | 1 |
| 3DG-STFM: 3D Geometric Guided Student-Teacher Feature Matching | Code | 1 |
| CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution | Code | 1 |
| A Unified Pruning Framework for Vision Transformers | Code | 1 |
| Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis | Code | 1 |
| Contrastive Representation Distillation | Code | 1 |
| CrossKD: Cross-Head Knowledge Distillation for Object Detection | Code | 1 |
| Constraint-aware and Ranking-distilled Token Pruning for Efficient Transformer Inference | Code | 1 |
| Consistent Quantity-Quality Control across Scenes for Deployment-Aware Gaussian Splatting | Code | 1 |
| Designing Large Foundation Models for Efficient Training and Inference: A Survey | Code | 1 |
| CompRess: Self-Supervised Learning by Compressing Representations | Code | 1 |
| Compression-Aware Video Super-Resolution | Code | 1 |
| Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup | Code | 1 |
| Contrastive Distillation on Intermediate Representations for Language Model Compression | Code | 1 |
| DarwinLM: Evolutionary Structured Pruning of Large Language Models | Code | 1 |
| Compacting, Picking and Growing for Unforgetting Continual Learning | Code | 1 |
| A Survey on Dynamic Neural Networks: from Computer Vision to Multi-modal Sensor Fusion | Code | 1 |
| Basic Binary Convolution Unit for Binarized Image Restoration Network | Code | 1 |
| Composable Interventions for Language Models | Code | 1 |
| An Information Theory-inspired Strategy for Automatic Network Pruning | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |