SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
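To make the methods named above concrete, the sketch below illustrates all three families in plain NumPy: magnitude-based parameter pruning, low-rank factorization via truncated SVD, and uniform affine weight quantization. The function names, the 90% sparsity level, the rank, and the 2-bit setting are illustrative assumptions for this page, not the method of any paper listed below.

```python
# Minimal NumPy sketches of three model-compression primitives.
# All names and settings here are illustrative assumptions.
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so ~`sparsity` of them become zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned


def low_rank_factorize(weights: np.ndarray, rank: int):
    """Approximate a weight matrix W as A @ B via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (m, rank)
    b = vt[:rank, :]             # (rank, n)
    return a, b                  # a @ b approximates weights


def quantize_uniform(weights: np.ndarray, bits: int = 8):
    """Uniform affine quantization of weights to `bits`-bit integer levels."""
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    q = np.round((weights - w_min) / scale).astype(np.int32)
    return q, scale, w_min       # dequantize with q * scale + w_min


w = np.random.randn(256, 256).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)     # ~90% of weights zeroed
a, b = low_rank_factorize(w, rank=32)           # 256*256 floats -> 2*256*32
q, scale, zero = quantize_uniform(w, bits=2)    # 2-bit codes + two scalars
w_dequant = q * scale + zero                    # low-precision approximation of w
```

In practice, pruned or quantized weights are usually fine-tuned (or distilled from the original model) afterwards to recover the accuracy lost in the compression step.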

Papers

Showing 276–300 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| A "Network Pruning Network" Approach to Deep Model Compression | | 0 |
| An Empirical Study of Low Precision Quantization for TinyML | | 0 |
| Can Model Compression Improve NLP Fairness | | 0 |
| Heterogeneous Federated Learning using Dynamic Model Pruning and Adaptive Gradient | | 0 |
| 2-bit Model Compression of Deep Convolutional Neural Network on ASIC Engine for Image Retrieval | | 0 |
| Deep Compression of Neural Networks for Fault Detection on Tennessee Eastman Chemical Processes | | 0 |
| Deep Model Compression Via Two-Stage Deep Reinforcement Learning | | 0 |
| Can collaborative learning be private, robust and scalable? | | 0 |
| CAIT: Triple-Win Compression towards High Accuracy, Fast Inference, and Favorable Transferability For ViTs | | 0 |
| Multihop: Leveraging Complex Models to Learn Accurate Simple Models | | 0 |
| Bringing AI To Edge: From Deep Learning's Perspective | | 0 |
| An Empirical Investigation of Matrix Factorization Methods for Pre-trained Transformers | | 0 |
| Adapting Models to Signal Degradation using Distillation | | 0 |
| BRIEDGE: EEG-Adaptive Edge AI for Multi-Brain to Multi-Robot Interaction | | 0 |
| Bridging the Resource Gap: Deploying Advanced Imitation Learning Models onto Affordable Embedded Platforms | | 0 |
| A Multi-objective Complex Network Pruning Framework Based on Divide-and-conquer and Global Performance Impairment Ranking | | 0 |
| Bridging the Gap Between Foundation Models and Heterogeneous Federated Learning | | 0 |
| An Embedded Deep Learning Object Detection Model For Traffic In Asian Countries | | 0 |
| AdapMTL: Adaptive Pruning Framework for Multitask Learning Model | | 0 |
| Accelerating Deep Learning with Dynamic Data Pruning | | 0 |
| Debiased Distillation by Transplanting the Last Layer | | 0 |
| Boosting Graph Neural Networks via Adaptive Knowledge Distillation | | 0 |
| Block-wise Intermediate Representation Training for Model Compression | | 0 |
| An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs | | 0 |
| Block Skim Transformer for Efficient Question Answering | | 0 |
Page 12 of 55

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |