
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
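The description above names three technique families: parameter pruning, low-rank factorization, and weight quantization. As a rough illustration only (not any listed paper's method), the following is a minimal PyTorch sketch applying each to a toy linear layer. All sizes, the 80% pruning ratio, and the rank r are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

# Toy layer to compress; dimensions are arbitrary for illustration.
layer = nn.Linear(512, 256)
W = layer.weight.data

# 1) Magnitude pruning: zero out the smallest 80% of weights by absolute value.
k = int(0.8 * W.numel())
threshold = W.abs().flatten().kthvalue(k).values
pruned_W = torch.where(W.abs() > threshold, W, torch.zeros_like(W))
sparsity = (pruned_W == 0).float().mean()

# 2) Low-rank factorization: approximate W with rank-r SVD factors,
#    storing two thin matrices (256 x r and r x 512) instead of the full weight.
r = 32
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]
B = Vh[:r, :]
rel_error = torch.norm(W - A @ B) / torch.norm(W)

# 3) Weight quantization: dynamic 8-bit quantization of the layer.
quantized = torch.quantization.quantize_dynamic(
    nn.Sequential(layer), {nn.Linear}, dtype=torch.qint8
)

print(f"sparsity after pruning: {sparsity:.2%}")
print(f"relative rank-{r} approximation error: {rel_error:.4f}")
```

The three steps are independent demonstrations on the same weight tensor; a real pipeline would typically fine-tune after pruning or factorization to recover accuracy.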

Papers

Showing 676–700 of 1356 papers

| Title | Status | Hype |
|---|---|---|
| QAPPA: Quantization-Aware Power, Performance, and Area Modeling of DNN Accelerators | — | 0 |
| Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey | — | 0 |
| Perturbation of Deep Autoencoder Weights for Model Compression and Classification of Tabular Data | — | 0 |
| Chemical transformer compression for accelerating both training and inference of molecular modeling | Code | 0 |
| DNA data storage, sequencing data-carrying DNA | — | 0 |
| Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures | — | 0 |
| Data-Free Adversarial Knowledge Distillation for Graph Neural Networks | — | 0 |
| Automatic Block-wise Pruning with Auxiliary Gating Structures for Deep Convolutional Neural Networks | — | 0 |
| Online Model Compression for Federated Learning with Large Models | — | 0 |
| Can collaborative learning be private, robust and scalable? | — | 0 |
| Multi-Granularity Structural Knowledge Distillation for Language Model Compression | Code | 0 |
| Towards Feature Distribution Alignment and Diversity Enhancement for Data-Free Quantization | — | 0 |
| Leaner and Faster: Two-Stage Model Compression for Lightweight Text-Image Retrieval | Code | 1 |
| Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications | — | 0 |
| Neural Network Pruning by Cooperative Coevolution | — | 0 |
| Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment | Code | 0 |
| Enabling All In-Edge Deep Learning: A Literature Review | — | 0 |
| LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification | Code | 0 |
| Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network | Code | 1 |
| FedSynth: Gradient Compression via Synthetic Data in Federated Learning | — | 0 |
| Aligned Weight Regularizers for Pruning Pretrained Neural Networks | — | 0 |
| Structured Pruning Learns Compact and Accurate Models | Code | 1 |
| TextPruner: A Model Pruning Toolkit for Pre-Trained Language Models | — | 0 |
| Kernel Modulation: A Parameter-Efficient Method for Training Convolutional Neural Networks | — | 0 |
| CHEX: CHannel EXploration for CNN Model Compression | Code | 1 |
Page 28 of 55

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified |
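Both benchmark entries use DKM, which learns a weight codebook during training via a differentiable, attention-based soft k-means assignment; "2bit-1dim" denotes a 4-entry codebook over individual scalar weights. The sketch below shows only the plain hard k-means weight sharing that DKM relaxes, not DKM itself; the function name and hyperparameters are illustrative.

```python
import torch

def kmeans_weight_clustering(weights, bits=2, iters=20):
    """Hard k-means weight sharing: each scalar weight is replaced by the
    nearest of 2**bits learned centroids (4 centroids for the 2-bit case)."""
    flat = weights.flatten()
    k = 2 ** bits
    # Initialize centroids at evenly spaced quantiles of the weight values.
    centroids = torch.quantile(flat, torch.linspace(0, 1, k))
    for _ in range(iters):
        # Assign every weight to its closest centroid.
        dists = (flat.unsqueeze(1) - centroids.unsqueeze(0)).abs()
        assign = dists.argmin(dim=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            mask = assign == j
            if mask.any():
                centroids[j] = flat[mask].mean()
    return centroids[assign].reshape(weights.shape), centroids

W = torch.randn(256, 512)            # stand-in weight matrix
W_q, codebook = kmeans_weight_clustering(W, bits=2)
print("codebook:", codebook)
print("relative error:", (torch.norm(W - W_q) / torch.norm(W)).item())
```

The hard argmin assignment here is not differentiable, which is why post-clustering accuracy drops sharply at very low bit widths (compare the 1-bit and 2-bit rows above); DKM's contribution is making the assignment soft so clustering can be optimized jointly with the task loss.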