
Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks (a minimal pruning sketch follows the source note below).

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
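As a concrete illustration of one of the methods named above, here is a minimal sketch of magnitude-based parameter pruning in NumPy. The function name `magnitude_prune` and the 90% sparsity target are illustrative choices for this page, not drawn from any specific paper listed below.

```python
# Minimal sketch of magnitude-based parameter pruning: zero out the
# smallest-magnitude weights. Illustrative only; not any paper's exact method.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"non-zero fraction: {np.count_nonzero(w_pruned) / w.size:.3f}")  # ~0.10
```

In practice, pruning like this is usually followed by fine-tuning to recover the accuracy lost when weights are removed.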

Papers

Showing 1151-1175 of 1356 papers

Title | Status | Hype
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System | - | 0
Compacting, Picking and Growing for Unforgetting Continual Learning | Code | 1
Structured Pruning of a BERT-based Question Answering Model | - | 0
Model Fusion via Optimal Transport | Code | 0
Structured Pruning of Large Language Models | Code | 1
Differentiable Sparsification for Deep Neural Networks | - | 0
Deep Neural Network Compression for Image Classification and Object Detection | Code | 0
How does topology influence gradient propagation and model performance of deep networks with DenseNet-type skip connections? | Code | 0
Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems | Code | 1
Adversarial Robustness vs. Model Compression, or Both? | Code | 0
REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs | - | 0
Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning | - | 0
Global Sparse Momentum SGD for Pruning Very Deep Neural Networks | Code | 1
Network Pruning for Low-Rank Binary Index | - | 0
Decoupling Weight Regularization from Batch Size for Model Compression | - | 0
GQ-Net: Training Quantization-Friendly Deep Networks | - | 0
Atomic Compression Networks | - | 0
Balancing Specialization, Generalization, and Compression for Detection and Tracking | - | 0
Extremely Small BERT Models from Mixed-Vocabulary Training | - | 0
Class-dependent Compression of Deep Neural Networks | Code | 0
Differentiable Mask for Pruning Convolutional and Recurrent Networks | - | 0
PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices | - | 0
LIT: Learned Intermediate Representation Training for Model Compression | Code | 0
Knowledge Distillation for End-to-End Person Search | Code | 0
Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation | - | 0
Page 47 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified
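For context on the DKM rows above: DKM (differentiable k-means) learns a weight-clustering codebook jointly with training, and "b-bit, 1-dim" means each scalar weight is mapped to one of 2^b shared centroids. The sketch below is only plain, post-hoc k-means with hard assignments, so it illustrates the compressed storage format rather than the DKM training procedure; all names in it are illustrative.

```python
# Hedged sketch: cluster scalar weights into a 2**bits-entry codebook
# (plain Lloyd's k-means, NOT the differentiable k-means used by DKM).
import numpy as np

def kmeans_quantize(weights: np.ndarray, bits: int = 2, iters: int = 20):
    """Map each scalar weight to one of 2**bits shared centroids (1-dim)."""
    k = 2 ** bits
    flat = weights.ravel()
    # initialize centroids evenly across the weight range
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # assign each weight to its nearest centroid
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # move each centroid to the mean of its assigned weights
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = flat[assign == j].mean()
    return centroids[assign].reshape(weights.shape), assign.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128))
w_q, codes = kmeans_quantize(w, bits=2)
print(np.unique(w_q).size)  # at most 4 distinct values for 2-bit clustering
```

Storage then drops from 32 bits per weight to `bits` bits per weight (the `codes` array) plus a tiny codebook, which is where the compression in the benchmark rows comes from.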