
Model Compression

Model compression has been an actively pursued area of research over the past few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks (a brief sketch of each appears below the source note).

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
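As a rough illustration of the three techniques named above, the sketch below shows toy NumPy versions of magnitude-based weight pruning, truncated-SVD low-rank factorization, and uniform weight quantization. The function names, sparsity levels, and bit widths are illustrative assumptions, not taken from any of the papers listed on this page.

import numpy as np

def magnitude_prune(w, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of weights (unstructured pruning).
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def low_rank_factorize(w, rank=32):
    # Approximate W with two thin factors A @ B via truncated SVD;
    # storage drops from m*n values to rank*(m+n).
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def uniform_quantize(w, bits=8):
    # Map weights onto 2**bits evenly spaced levels, then reconstruct
    # (simulated quantization; real deployments store the integer codes).
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / (2**bits - 1)
    return np.round((w - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 512)).astype(np.float32)
a, b = low_rank_factorize(w, rank=32)
print("zeros after 90% pruning:", float(np.mean(magnitude_prune(w, 0.9) == 0.0)))
print("rank-32 relative error:", float(np.linalg.norm(w - a @ b) / np.linalg.norm(w)))
print("max 8-bit quantization error:", float(np.abs(uniform_quantize(w) - w).max()))

In practice these ideas are often combined (for example, pruning followed by quantization) and paired with knowledge distillation, which many of the papers below build on.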

Papers

Showing 751–800 of 1356 papers

Title | Status | Hype
Triple Sparsification of Graph Convolutional Networks without Sacrificing the Accuracy | — | 0
Model Blending for Text Classification | — | 0
Quiver neural networks | — | 0
Efficient model compression with Random Operation Access Specific Tile (ROAST) hashing | Code | 0
Model Compression for Resource-Constrained Mobile Robots | — | 0
T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit | — | 0
Normalized Feature Distillation for Semantic Segmentation | — | 0
Rank-Based Filter Pruning for Real-Time UAV Tracking | — | 0
Quantum Neural Network Compression | — | 0
KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation | — | 0
PCEE-BERT: Accelerating BERT Inference via Patient and Confident Early Exiting | Code | 0
Language model compression with weighted low-rank factorization | — | 0
QUIDAM: A Framework for Quantization-Aware DNN Accelerator and Model Co-Exploration | — | 0
QTI Submission to DCASE 2021: residual normalization for device-imbalanced acoustic scene classification with efficient design | — | 0
Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach | — | 0
Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification | — | 0
An Automatic and Efficient BERT Pruning for Edge AI Systems | — | 0
Knowledge Distillation for Oriented Object Detection on Aerial Images | — | 0
Revisiting Self-Distillation | — | 0
Accelerating Inference and Language Model Fusion of Recurrent Neural Network Transducers via End-to-End 4-bit Quantization | — | 0
Atrial Fibrillation Detection Using Weight-Pruned, Log-Quantised Convolutional Neural Networks | — | 0
STD-NET: Search of Image Steganalytic Deep-learning Architecture via Hierarchical Tensor Decomposition | Code | 0
A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation | — | 0
HideNseek: Federated Lottery Ticket via Server-side Pruning and Sign Supermask | — | 0
Differentially Private Model Compression | — | 0
Canonical convolutional neural networks | Code | 0
Resource Allocation for Compression-aided Federated Learning with High Distortion Rate | — | 0
MiniDisc: Minimal Distillation Schedule for Language Model Compression | Code | 0
Do we need Label Regularization to Fine-tune Pre-trained Language Models? | — | 0
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models | — | 0
Aligning Logits Generatively for Principled Black-Box Knowledge Distillation | Code | 0
InDistill: Information flow-preserving knowledge distillation for model compression | Code | 0
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey | — | 0
Perturbation of Deep Autoencoder Weights for Model Compression and Classification of Tabular Data | — | 0
QAPPA: Quantization-Aware Power, Performance, and Area Modeling of DNN Accelerators | — | 0
Chemical transformer compression for accelerating both training and inference of molecular modeling | Code | 0
DNA data storage, sequencing data-carrying DNA | — | 0
Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures | — | 0
Data-Free Adversarial Knowledge Distillation for Graph Neural Networks | — | 0
Automatic Block-wise Pruning with Auxiliary Gating Structures for Deep Convolutional Neural Networks | — | 0
Online Model Compression for Federated Learning with Large Models | — | 0
Can collaborative learning be private, robust and scalable? | — | 0
Multi-Granularity Structural Knowledge Distillation for Language Model Compression | Code | 0
Towards Feature Distribution Alignment and Diversity Enhancement for Data-Free Quantization | — | 0
Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications | — | 0
Neural Network Pruning by Cooperative Coevolution | — | 0
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment | Code | 0
Enabling All In-Edge Deep Learning: A Literature Review | — | 0
LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification | Code | 0
Aligned Weight Regularizers for Pruning Pretrained Neural Networks | — | 0
Page 16 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified