Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
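
To make one of these techniques concrete, below is a minimal NumPy sketch of symmetric post-training weight quantization. The function names, the 8-bit default, and the single per-tensor scale are illustrative assumptions for this page, not the method of any particular paper listed below.

```python
# Minimal sketch of symmetric post-training weight quantization.
# Illustrative only: real toolchains typically use per-channel scales
# and calibration data rather than a single per-tensor scale.
import numpy as np

def quantize_weights(w: np.ndarray, num_bits: int = 8):
    """Map float weights to signed integers with one scale factor."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g., 127 for 8 bits
    scale = np.abs(w).max() / qmax          # symmetric range around zero
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_weights(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for error analysis."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure the error.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_weights(w)
w_hat = dequantize_weights(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
```

A single per-tensor scale keeps the sketch short; production quantizers usually fit per-channel scales and calibrate on real activations to limit the accuracy drop.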

Papers

Showing 431–440 of 1,356 papers

Title | Status | Hype
Empirical Evaluation of Deep Learning Model Compression Techniques on the WaveNet Vocoder | Code | 0
Efficient model compression with Random Operation Access Specific Tile (ROAST) hashing | Code | 0
Improved Knowledge Distillation via Full Kernel Matrix Transfer | Code | 0
Focused Quantization for Sparse CNNs | Code | 0
Compressing Convolutional Neural Networks via Factorized Convolutional Filters | Code | 0
Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression | Code | 0
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment | Code | 0
Systematic Outliers in Large Language Models | Code | 0
Annealing Knowledge Distillation | Code | 0
Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified