
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
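As a concrete illustration of the three techniques named in the definition, the sketch below applies magnitude-based parameter pruning, a low-rank (SVD) factorization, and dynamic 8-bit weight quantization to a toy PyTorch model. The architecture, the 50% sparsity level, and the rank-32 setting are arbitrary choices for demonstration and are not drawn from any paper listed on this page.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a pretrained network (hypothetical layer sizes).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# 1) Parameter pruning: zero out the 50% smallest-magnitude weights per layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# 2) Low-rank factorization: approximate a weight matrix by rank-32 factors,
#    trading 256*784 parameters for 32*(256+784).
W = model[0].weight.data                     # shape (256, 784)
U, S, V = torch.svd_lowrank(W, q=32)
W_approx = U @ torch.diag(S) @ V.T           # rank-32 approximation of W

# 3) Weight quantization: dynamic 8-bit quantization of the linear layers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

sparsity = (model[0].weight == 0).float().mean().item()
rel_err = ((W - W_approx).norm() / W.norm()).item()
print(f"layer-0 sparsity: {sparsity:.0%}, rank-32 relative error: {rel_err:.3f}")
```

In practice the three techniques are complementary: pruning and factorization shrink the parameter count, while quantization reduces the number of bits stored per remaining parameter.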

Papers

Showing 351–360 of 1356 papers

| Title | Status | Hype |
|---|---|---|
| LLM Inference Unveiled: Survey and Roofline Model Insights | Code | 4 |
| Model Compression Method for S4 with Diagonal State Space Layers using Balanced Truncation |  | 0 |
| FinGPT-HPC: Efficient Pretraining and Finetuning Large Language Models for Financial Applications with High-Performance Computing |  | 0 |
| From Cloud to Edge: Rethinking Generative AI for Low-Resource Design Challenges |  | 0 |
| PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning | Code | 1 |
| A Survey on Knowledge Distillation of Large Language Models | Code | 5 |
| Towards a tailored mixed-precision sub-8-bit quantization scheme for Gated Recurrent Units using Genetic Algorithms |  | 0 |
| Extraction of nonlinearity in neural networks with Koopman operator |  | 0 |
| Fast Vocabulary Transfer for Language Model Compression | Code | 1 |
| Model Compression and Efficient Inference for Large Language Models: A Survey |  | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified |
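The DKM entries in the table refer to differentiable k-means weight clustering, in which every scalar weight is replaced by one of 2^bits shared centroids (four centroids for the 2bit-1dim row, two for 1bit-1dim). The sketch below shows plain hard k-means clustering of a weight tensor to convey the idea; the actual DKM method uses a differentiable soft assignment so centroids can be learned end-to-end during training, and this code does not reproduce the benchmarked models.

```python
import torch

def cluster_weights(w: torch.Tensor, bits: int = 2, iters: int = 10) -> torch.Tensor:
    """Hard k-means clustering of scalar ("1dim") weights into 2**bits centroids.

    Simplified stand-in for DKM, which uses a soft, differentiable
    assignment during training instead of the hard argmin used here.
    """
    flat = w.flatten()
    k = 2 ** bits
    # Initialize centroids evenly across the observed weight range.
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    for _ in range(iters):
        # Assign every weight to its nearest centroid.
        assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.numel() > 0:
                centroids[j] = members.mean()
    return centroids[assign].reshape(w.shape)

w = torch.randn(256, 784)            # hypothetical weight tensor
w_2bit = cluster_weights(w, bits=2)
print(w_2bit.unique().numel())       # at most 4 distinct values -> 2-bit codes
```

Storing only the per-weight cluster indices plus a handful of floating-point centroids is what yields the compression; the gap between the 2-bit (82.13) and 1-bit (63.17) rows shows the accuracy cost of shrinking the codebook further.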