
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
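To make the three technique families named above concrete, here is a minimal numpy sketch of all three applied to a single weight matrix: magnitude pruning, truncated-SVD low-rank factorization, and uniform 8-bit quantization. This is an illustration only, not the method of any paper listed below; the function names, the 256x256 toy matrix, and the hyperparameters (sparsity, rank, bit width) are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch of three common compression primitives;
# not taken from any specific paper listed on this page.

def magnitude_prune(W, sparsity=0.9):
    """Parameter pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= threshold, W, 0.0)

def low_rank_factorize(W, rank=32):
    """Low-rank factorization: W (m x n) approximated as A @ B
    with A (m x rank) and B (rank x n) via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank, :]

def quantize_uint8(W):
    """Weight quantization: uniform affine quantization to 8 bits."""
    lo, hi = W.min(), W.max()
    scale = (hi - lo) / 255.0
    q = np.round((W - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)

W_pruned = magnitude_prune(W)
A, B = low_rank_factorize(W)
q, scale, lo = quantize_uint8(W)
W_deq = dequantize(q, scale, lo)

norm = np.linalg.norm(W)
print("pruning error:     ", np.linalg.norm(W - W_pruned) / norm)
print("low-rank error:    ", np.linalg.norm(W - A @ B) / norm)
print("quantization error:", np.linalg.norm(W - W_deq) / norm)
```

In practice these primitives are usually combined and followed by fine-tuning to recover lost accuracy; how that trade-off between compression ratio and accuracy is managed is where most of the papers below differ.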

Papers

Showing 776-800 of 1356 papers

Title | Status | Hype
LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation | - | 0
An Overview of Neural Network Compression | - | 0
Lossless Model Compression via Joint Low-Rank Factorization Optimization | - | 0
Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning | - | 0
Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not? | - | 0
Low-Complexity Acoustic Scene Classification Using Data Augmentation and Lightweight ResNet | - | 0
Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer | - | 0
Low-Rank Compression for IMC Arrays | - | 0
Low-Rank Correction for Quantized LLMs | - | 0
Low-Rank Matrix Approximation for Neural Network Compression | - | 0
Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training | - | 0
Low-Rank Prune-And-Factorize for Language Model Compression | - | 0
Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization | - | 0
LPRNet: Lightweight Deep Network by Low-rank Pointwise Residual Convolution | - | 0
A Novel Architecture Slimming Method for Network Pruning and Knowledge Distillation | - | 0
TinyM^2Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks for Sustainable Edge Deployment | - | 0
Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators | - | 0
Magic for the Age of Quantized DNNs | - | 0
Making deep neural networks work for medical audio: representation, compression and domain adaptation | - | 0
Mamba-PTQ: Outlier Channels in Recurrent Large Language Models | - | 0
TinyR1-32B-Preview: Boosting Accuracy with Branch-Merge Distillation | - | 0
MARS: Multi-macro Architecture SRAM CIM-Based Accelerator with Co-designed Compressed Neural Networks | - | 0
An Improving Framework of regularization for Network Compression | - | 0
A New Clustering-Based Technique for the Acceleration of Deep Convolutional Networks | - | 0
MaskPrune: Mask-based LLM Pruning for Layer-wise Uniform Structures | - | 0
Page 32 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified