
Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
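To make the three named techniques concrete, below is a minimal NumPy sketch of magnitude-based pruning, truncated-SVD low-rank factorization, and symmetric 8-bit weight quantization applied to a single weight matrix. It is illustrative only and not drawn from any paper listed here; all function names, shapes, and hyperparameters (sparsity, rank, bit width) are assumptions for the example.

```python
# Minimal, self-contained sketches of the three compression primitives named
# above; shapes, sparsity, rank, and bit width are illustrative assumptions.
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Approximate w (m x n) as a @ b, a: (m, rank), b: (rank, n), via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]          # fold singular values into the left factor
    b = vt[:rank, :]
    return a, b

def quantize_int8(w: np.ndarray):
    """Symmetric uniform quantization of weights to 8-bit integers."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale                     # dequantize with q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 512))

pruned = magnitude_prune(w, sparsity=0.9)   # keep ~10% of the entries
a, b = low_rank_factorize(w, rank=32)       # (256*32 + 32*512) / (256*512) ~= 19% of params
q, scale = quantize_int8(w)                 # 8 bits per weight instead of 32

print("nonzero fraction after pruning:", np.count_nonzero(pruned) / w.size)
print("low-rank relative error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
print("quantization relative error:", np.linalg.norm(w - q * scale) / np.linalg.norm(w))
```

In practice these primitives are applied per layer rather than to one matrix, and are usually followed by fine-tuning to recover the accuracy lost to compression.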

Papers

Showing 951–975 of 1356 papers

Title | Status | Hype
Leveraging Filter Correlations for Deep Model Compression | – | 0
Light Field Compression Based on Implicit Neural Representation | – | 0
Lightweight Convolutional Representations for On-Device Natural Language Processing | – | 0
Lightweight Design and Optimization methods for DCNNs: Progress and Futures | – | 0
LINR-PCGC: Lossless Implicit Neural Representations for Point Cloud Geometry Compression | – | 0
Lipschitz Continuity Guided Knowledge Distillation | – | 0
LIT: Block-wise Intermediate Representation Training for Model Compression | – | 0
Large Language Model Compression with Neural Architecture Search | – | 0
Locality-Sensitive Hashing for f-Divergences: Mutual Information Loss and Beyond | – | 0
Localization-aware Channel Pruning for Object Detection | – | 0
LoCa: Logit Calibration for Knowledge Distillation | – | 0
Local-Selective Feature Distillation for Single Image Super-Resolution | – | 0
LORTSAR: Low-Rank Transformer for Skeleton-based Action Recognition | – | 0
LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation | – | 0
Lossless Model Compression via Joint Low-Rank Factorization Optimization | – | 0
Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning | – | 0
Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not? | – | 0
Low-Complexity Acoustic Scene Classification Using Data Augmentation and Lightweight ResNet | – | 0
Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer | – | 0
Low-Rank Compression for IMC Arrays | – | 0
Low-Rank Correction for Quantized LLMs | – | 0
Low-Rank Matrix Approximation for Neural Network Compression | – | 0
Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training | – | 0
Low-Rank Prune-And-Factorize for Language Model Compression | – | 0
Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization | – | 0
Page 39 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | – | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | – | Unverified