Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
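The three techniques named above shrink a network in different ways: pruning removes individual weights, factorization replaces a large weight matrix with a product of smaller ones, and quantization stores weights at lower precision. The following is a minimal sketch, assuming PyTorch; the layer sizes, 50% pruning ratio, and rank 32 are illustrative assumptions, not settings taken from any paper listed below.

```python
# Minimal sketch (assuming PyTorch) of parameter pruning, low-rank
# factorization, and weight quantization. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Parameter pruning: zero out the 50% of weights with smallest L1 magnitude.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
prune.remove(model[0], "weight")  # bake the pruning mask into the weight tensor

# Low-rank factorization: approximate W (256x784) by a rank-32 product U @ V,
# replacing one large matrix with two much smaller ones.
W = model[0].weight.data
U_full, S, Vh = torch.linalg.svd(W, full_matrices=False)
r = 32
U = U_full[:, :r] * S[:r]   # shape (256, r)
V = Vh[:r, :]               # shape (r, 784)
low_rank = nn.Sequential(nn.Linear(784, r, bias=False), nn.Linear(r, 256))
low_rank[0].weight.data = V
low_rank[1].weight.data = U
low_rank[1].bias.data = model[0].bias.data

# Weight quantization: store Linear weights as int8, dequantizing on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

In the factorized layer the parameter count drops from 256·784 = 200,704 to 32·(256 + 784) = 33,280; in practice the rank and pruning ratio are tuned against the resulting accuracy drop.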

Papers

Showing 251–275 of 1356 papers

Title | Status | Hype
Compressed models are NOT miniature versions of large models | | 0
Mamba-PTQ: Outlier Channels in Recurrent Large Language Models | | 0
Minimizing PLM-Based Few-Shot Intent Detectors | Code | 0
Inference Optimization of Foundation Models on AI Accelerators | | 0
Explicit-NeRF-QA: A Quality Assessment Database for Explicit NeRF Model Compression | Code | 0
Composable Interventions for Language Models | Code | 1
Quantizing YOLOv7: A Comprehensive Study | | 0
Beyond Perplexity: Multi-dimensional Safety Evaluation of LLM Compression | Code | 0
AMD: Automatic Multi-step Distillation of Large-scale Vision Models | | 0
The Impact of Quantization and Pruning on Deep Reinforcement Learning Models | | 0
MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models | | 0
Efficient DNN-Powered Software with Fair Sparse Models | | 0
FoldGPT: Simple and Effective Large Language Model Compression Scheme | | 0
MCNC: Manifold Constrained Network Compression | | 0
Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers | Code | 2
LiteYOLO-ID: A Lightweight Object Detection Network for Insulator Defect Detection | Code | 1
Exploring compressibility of transformer based text-to-music (TTM) models | | 0
Speeding Up Image Classifiers with Little Companions | | 0
Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging | Code | 1
Reinforced Knowledge Distillation for Time Series Regression | Code | 0
MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression | Code | 2
FLoCoRA: Federated learning compression with low-rank adaptation | Code | 0
Failure-Resilient Distributed Inference with Model Compression over Heterogeneous Edge Devices | | 0
SDQ: Sparse Decomposed Quantization for LLM Inference | | 0
Finding Task-specific Subnetworks in Multi-task Spoken Language Understanding Model | | 0
Page 11 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified