
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
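
To make the first of those techniques concrete, below is a minimal sketch of magnitude-based (L1) weight pruning using PyTorch's built-in torch.nn.utils.prune utilities. The toy model, layer sizes, and the 50% sparsity level are arbitrary choices for illustration, not drawn from any paper listed on this page.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model; any nn.Module with weight tensors works the same way.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Magnitude (L1) pruning: zero out the 50% of weights with the
# smallest absolute value in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        # Fold the pruning mask into the weight tensor permanently.
        prune.remove(module, "weight")

# Report the resulting sparsity of each Linear layer.
for i, module in enumerate(model.modules()):
    if isinstance(module, nn.Linear):
        w = module.weight
        sparsity = 100.0 * float((w == 0).sum()) / w.numel()
        print(f"Linear layer {i}: {sparsity:.1f}% of weights pruned")
```

Unstructured pruning like this reduces stored parameters but needs sparse-aware kernels or formats to yield real speedups; structured pruning (removing whole channels or heads) trades some accuracy for hardware-friendly dense shapes.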

Papers

Showing 661–670 of 1356 papers

Title | Status | Hype
------|--------|-----
Guaranteed Quantization Error Computation for Neural Network Model Compression | — | 0
Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures | — | 0
Deep Collective Knowledge Distillation | — | 0
Learning Accurate Performance Predictors for Ultrafast Automated Model Compression | Code | 0
Structured Pruning for Multi-Task Deep Neural Networks | — | 0
Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural Network Pruning | — | 0
oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes | — | 0
A Multi-objective Complex Network Pruning Framework Based on Divide-and-conquer and Global Performance Impairment Ranking | — | 0
Tetra-AML: Automatic Machine Learning via Tensor Networks | — | 0
Information-Theoretic GAN Compression with Variational Energy-based Model | — | 0
Page 67 of 136

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--|-------|--------|---------|----------|-------
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified
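
For context on the "n-bit-1dim" naming above: this style of compression clusters scalar (1-dimensional) weights into 2^n shared centroids (4 centroids for 2-bit, 2 for 1-bit), so each weight is stored as a short index into a small codebook. The sketch below shows plain, non-differentiable k-means weight clustering to illustrate the storage idea only; it is not the DKM algorithm behind the results above, and the helper name kmeans_quantize is hypothetical.

```python
import numpy as np

def kmeans_quantize(weights: np.ndarray, n_bits: int, n_iters: int = 20):
    """Cluster scalar weights into 2**n_bits centroids (a 1-dim codebook).

    Returns (indices, codebook): each weight is replaced by a small
    integer index into a codebook of shared centroid values.
    NOTE: illustrative k-means only, not the DKM method cited above.
    """
    flat = weights.ravel()
    k = 2 ** n_bits
    # Initialize centroids spread evenly across the weight range.
    codebook = np.linspace(flat.min(), flat.max(), k)
    for _ in range(n_iters):
        # Assign each weight to its nearest centroid.
        indices = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            members = flat[indices == j]
            if members.size:
                codebook[j] = members.mean()
    return indices.reshape(weights.shape), codebook

# Example: quantize a random weight matrix to 2 bits (4 centroids).
w = np.random.randn(256, 256).astype(np.float32)
idx, codebook = kmeans_quantize(w, n_bits=2)
w_quant = codebook[idx]  # dequantized weights
err = np.abs(w - w_quant).mean()
print(f"codebook: {np.round(codebook, 3)}, mean abs error: {err:.4f}")
```

At 2 bits per weight plus a tiny codebook, storage drops roughly 16x relative to float32, which is why accuracy retention at such low bit widths is the quantity the benchmark tracks.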