
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the proposed methods for compressing deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
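To make the three techniques named above concrete, here is a minimal NumPy sketch of each (illustrative only; the function names and the toy layer are our own, not taken from any of the papers listed below):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def low_rank_factorize(weights: np.ndarray, rank: int):
    """Approximate a 2-D weight matrix as A @ B via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def quantize_uniform(weights: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniform affine quantization to 2**bits levels, returned de-quantized."""
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels or 1.0  # guard against constant weights
    q = np.round((weights - w_min) / scale)
    return q * scale + w_min

# Toy demo: a 256x256 layer pruned to 90% sparsity, factorized to rank 32,
# and quantized to 8 bits.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
w_pruned = magnitude_prune(w, sparsity=0.9)
a, b = low_rank_factorize(w, rank=32)   # 65,536 -> 16,384 parameters
w_quant = quantize_uniform(w, bits=8)
```

Each technique trades accuracy for size differently: pruning stores fewer nonzero weights, factorization replaces one large matrix with two thin ones, and quantization stores each weight in fewer bits.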

Papers

Showing 101–110 of 1356 papers (page 11 of 136)

Title | Status | Hype
Synergistic Effects of Knowledge Distillation and Structured Pruning for Self-Supervised Speech Models | - | 0
Theoretical Guarantees for Low-Rank Compression of Deep Neural Networks | - | 0
Activation-Informed Merging of Large Language Models | Code | 1
Accelerating Linear Recurrent Neural Networks for the Edge with Unstructured Sparsity | - | 0
MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks | - | 0
Attention Sinks and Outlier Features: A 'Catch, Tag, and Release' Mechanism for Embeddings | - | 0
Role of Mixup in Topological Persistence Based Knowledge Distillation for Wearable Sensor Data | - | 0
Huff-LLM: End-to-End Lossless Compression for Efficient LLM Inference | - | 0
Efficient Supernet Training with Orthogonal Softmax for Scalable ASR Model Compression | - | 0
Pivoting Factorization: A Compact Meta Low-Rank Representation of Sparsity for Efficient Inference in Large Language Models | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified
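The DKM entries refer to differentiable k-means clustering of weights, where "b-bit, 1-dim" means each scalar weight is replaced by one of 2^b shared centroids. As a rough illustration of that codebook lookup, here is a plain, non-differentiable k-means sketch (our own stand-in, not the actual DKM training procedure):

```python
import numpy as np

def kmeans_cluster_weights(weights: np.ndarray, bits: int = 2,
                           iters: int = 20, seed: int = 0):
    """Cluster scalar weights into 2**bits shared centroids (hard k-means).

    Covers the 1-dim (scalar codebook) case from the table above;
    DKM itself uses a soft, differentiable assignment during training.
    """
    flat = weights.ravel()
    k = 2 ** bits
    rng = np.random.default_rng(seed)
    centroids = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid ...
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # ... then move each centroid to the mean of its members.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape), centroids
```

At 2 bits, each weight is stored as a 2-bit index into a 4-entry codebook, roughly a 16x reduction over float32 storage; the 1-bit configuration halves the codebook to 2 centroids, consistent with the much larger accuracy drop in row 2 above.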