
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
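The techniques named above are easiest to see in code. Below is a minimal NumPy sketch of two of them, magnitude-based parameter pruning and uniform affine weight quantization; the function names and the 50%-sparsity / 8-bit settings are illustrative choices, not taken from any paper on this page.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so roughly `sparsity`
    fraction of entries become zero (illustrative one-shot pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantize(weights: np.ndarray, bits: int = 8):
    """Uniform affine quantization: map floats onto 2**bits - 1 levels,
    returning integer codes plus the (scale, zero_point) needed to dequantize."""
    lo, hi = float(weights.min()), float(weights.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((weights - lo) / scale).astype(np.uint8 if bits <= 8 else np.int32)
    return codes, scale, lo

# Usage: prune half the weights, then store the rest as 8-bit codes.
w = np.random.randn(64, 64).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.5)
codes, scale, zero_point = uniform_quantize(pruned, bits=8)
w_hat = codes.astype(np.float32) * scale + zero_point  # dequantized approximation
```

In practice these one-shot versions are usually followed by fine-tuning to recover accuracy; the sketch only shows the compression step itself.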

Papers

Showing 231–240 of 1356 papers

Title | Status | Hype
Knowledge Grafting of Large Language Models | Code | 0
Making deep neural networks work for medical audio: representation, compression and domain adaptation | | 0
LatentLLM: Attention-Aware Joint Tensor Compression | | 0
Is Quantum Optimization Ready? An Effort Towards Neural Network Compression using Adiabatic Quantum Computing | | 0
Edge-First Language Model Inference: Models, Metrics, and Tradeoffs | | 0
On Multilingual Encoder Language Model Compression for Low-Resource Languages | | 0
Saten: Sparse Augmented Tensor Networks for Post-Training Compression of Large Language Models | | 0
RanDeS: Randomized Delta Superposition for Multi-Model Compression | Code | 0
Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer | | 0
KDH-MLTC: Knowledge Distillation for Healthcare Multi-Label Text Classification | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
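For context on the "Nbit-1dim" naming above: DKM (differentiable k-means) compresses weights by clustering them, where the bit width sets the codebook size (2 bits gives 4 centroids) and "1dim" means scalar weights are clustered rather than multi-dimensional blocks. The sketch below uses plain hard-assignment k-means only to illustrate that storage scheme; DKM itself replaces the hard assignment with a differentiable, attention-style soft assignment so the clustering can be learned during training. Function names and sizes here are hypothetical.

```python
import numpy as np

def cluster_weights_1d(weights: np.ndarray, bits: int = 2, iters: int = 25):
    """Plain 1-d k-means over scalar weights: 2**bits centroids, so each
    weight is stored as a `bits`-bit code plus a tiny float codebook."""
    flat = weights.ravel()
    k = 2 ** bits
    centroids = np.linspace(flat.min(), flat.max(), k)  # spread initial centroids over the range
    for _ in range(iters):
        codes = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = flat[codes == j]
            if members.size:  # leave empty clusters where they are
                centroids[j] = members.mean()
    codes = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return codes.reshape(weights.shape), centroids

# Usage: "2bit-1dim" corresponds to 4 centroids over scalar weights.
w = np.random.randn(128, 64).astype(np.float32)
codes, codebook = cluster_weights_1d(w, bits=2)
w_hat = codebook[codes]  # reconstruct the layer by codebook lookup
```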