SOTAVerified

Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
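As a rough illustration of the three techniques named above, the sketch below applies magnitude-based pruning, truncated-SVD low-rank factorization, and uniform 8-bit weight quantization to a toy NumPy weight matrix. It is not taken from any of the listed papers; the function names, the 90% sparsity level, the rank of 32, and the matrix shape are illustrative assumptions.

```python
# Minimal sketch (illustrative only) of three common model-compression techniques
# applied to a single weight matrix: pruning, low-rank factorization, quantization.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    # Zero out the smallest-magnitude entries so that `sparsity` fraction is zero.
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def low_rank_factorize(weights, rank=32):
    # Replace W (m x n) with two thin factors A (m x rank) and B (rank x n).
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

def quantize_uint8(weights):
    # Uniform 8-bit quantization: store one byte per weight plus (offset, scale).
    lo, hi = float(weights.min()), float(weights.max())
    scale = max((hi - lo) / 255.0, 1e-12)
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    # Recover approximate float weights from the 8-bit codes for inference.
    return q.astype(np.float32) * scale + lo

w = np.random.randn(256, 512).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)   # 90% of entries set to zero
a, b = low_rank_factorize(w, rank=32)         # two thin factors approximate w
q, lo, scale = quantize_uint8(w)              # ~4x smaller than float32 storage
w_restored = dequantize(q, lo, scale)
```

In this toy setting the factorization replaces a 256x512 matrix (131,072 parameters) with 256x32 + 32x512 = 24,576 parameters, roughly a 5x reduction before any fine-tuning.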

Papers

Showing 751–800 of 1356 papers

Title | Status | Hype
Sometimes Painful but Certainly Promising: Feasibility and Trade-offs of Language Model Inference at the Edge |  | 0
SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching |  | 0
Sparse Deep Learning for Time Series Data: Theory and Applications |  | 0
Sparse Unbalanced GAN Training with In-Time Over-Parameterization |  | 0
Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks |  | 0
Compressible Spectral Mixture Kernels with Sparse Dependency Structures for Gaussian Processes |  | 0
Spectral Pruning: Compressing Deep Neural Networks via Spectral Analysis and its Generalization Error |  | 0
Speeding up Convolutional Neural Networks with Low Rank Expansions |  | 0
Speeding Up Image Classifiers with Little Companions |  | 0
Spirit Distillation: A Model Compression Method with Multi-domain Knowledge Transfer |  | 0
Sponge Attacks on Sensing AI: Energy-Latency Vulnerabilities and Defense via Model Pruning |  | 0
SS-Auto: A Single-Shot, Automatic Structured Weight Pruning Framework of DNNs with Ultra-High Efficiency |  | 0
Stability Based Filter Pruning for Accelerating Deep CNNs |  | 0
Effective Model Compression via Stage-wise Pruning |  | 0
Statistical Model Compression for Small-Footprint Natural Language Understanding |  | 0
Strategic Fusion Optimizes Transformer Compression |  | 0
Streamlining Tensor and Network Pruning in PyTorch |  | 0
Structured Bayesian Compression for Deep Neural Networks Based on The Turbo-VBI Approach |  | 0
Structured Compression by Weight Encryption for Unstructured Pruning and Quantization |  | 0
Structured Convolutions for Efficient Neural Network Design |  | 0
Structured Model Pruning for Efficient Inference in Computational Pathology |  | 0
Structured Multi-Hashing for Model Compression |  | 0
Structured Pruning for Multi-Task Deep Neural Networks |  | 0
Structured Pruning is All You Need for Pruning CNNs at Initialization |  | 0
Structured Pruning Learns Compact and Accurate Models |  | 0
SubCharacter Chinese-English Neural Machine Translation with Wubi encoding |  | 0
Sub-network Multi-objective Evolutionary Algorithm for Filter Pruning |  | 0
Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural Network Pruning |  | 0
Survey of Dropout Methods for Deep Neural Networks |  | 0
Swallowing the Poison Pills: Insights from Vulnerability Disparity Among LLMs |  | 0
SwapNet: Efficient Swapping for DNN Inference on Edge AI Devices Beyond the Memory Budget |  | 0
Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation |  | 0
Swing Distillation: A Privacy-Preserving Knowledge Distillation Framework |  | 0
SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models |  | 0
SWSC: Shared Weight for Similar Channel in LLM |  | 0
Synergistic Effects of Knowledge Distillation and Structured Pruning for Self-Supervised Speech Models |  | 0
Introducing Pose Consistency and Warp-Alignment for Self-Supervised 6D Object Pose Estimation in Color Images |  | 0
TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models |  | 0
TaQ-DiT: Time-aware Quantization for Diffusion Transformers |  | 0
Task-Agnostic and Adaptive-Size BERT Compression |  | 0
Task-Agnostic Structured Pruning of Speech Representation Models |  | 0
Diffusion Model Compression for Image-to-Image Translation |  | 0
Temporal Action Detection Model Compression by Progressive Block Drop |  | 0
Tensor Contraction Layers for Parsimonious Deep Nets |  | 0
TensorGPT: Efficient Compression of Large Language Models based on Tensor-Train Decomposition |  | 0
Tensorial Neural Networks: Generalization of Neural Networks and Application to Model Compression |  | 0
Tensorization is a powerful but underexplored tool for compression and interpretability of neural networks |  | 0
Test-Time Adaptation Toward Personalized Speech Enhancement: Zero-Shot Learning with Knowledge Distillation |  | 0
Tetra-AML: Automatic Machine Learning via Tensor Networks |  | 0
TextPruner: A Model Pruning Toolkit for Pre-Trained Language Models |  | 0
Page 16 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified