SOTAVerified

Model Compression

Model Compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
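As a hedged illustration of the first technique named above, here is a minimal magnitude-based parameter-pruning pass in PyTorch. The `magnitude_prune` helper and its per-layer sparsity default are illustrative assumptions, not code from any of the papers listed below.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> nn.Module:
    """Zero out the smallest-magnitude weights in each linear/conv layer.

    `sparsity` is the fraction of weights removed per layer; 0.5 is an
    illustrative default, not a value taken from any paper below.
    """
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            weight = module.weight.data
            k = int(sparsity * weight.numel())
            if k == 0:
                continue
            # Per-layer threshold: the k-th smallest absolute weight.
            threshold = weight.abs().flatten().kthvalue(k).values
            # Keep only weights strictly above the threshold.
            module.weight.data = weight * (weight.abs() > threshold)
    return model

# Usage: prune half the weights of a toy network.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(net, sparsity=0.5)
```

In practice, pruning passes like this are usually interleaved with fine-tuning so the network can recover the accuracy lost when weights are zeroed.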

Papers

Showing 626–650 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Benchmarking Adversarial Robustness of Compressed Deep Learning Models |  | 0 |
| An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers |  | 0 |
| ACAM-KD: Adaptive and Cooperative Attention Masking for Knowledge Distillation |  | 0 |
| Differentiable Mask for Pruning Convolutional and Recurrent Networks |  | 0 |
| BD-KD: Balancing the Divergences for Online Knowledge Distillation |  | 0 |
| Differentiable Feature Aggregation Search for Knowledge Distillation |  | 0 |
| Differentiable Architecture Compression |  | 0 |
| An Efficient Real-Time Object Detection Framework on Resource-Constricted Hardware Devices via Software and Hardware Co-design |  | 0 |
| Developing Far-Field Speaker System Via Teacher-Student Learning |  | 0 |
| Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey |  | 0 |
| Bayesian Federated Model Compression for Communication and Computation Efficiency |  | 0 |
| Design and Prototyping Distributed CNN Inference Acceleration in Edge Computing |  | 0 |
| Bayesian Deep Learning Via Expectation Maximization and Turbo Deep Approximate Message Passing |  | 0 |
| A Model Compression Method with Matrix Product Operators for Speech Enhancement |  | 0 |
| Activation Sparsity Opportunities for Compressing General Large Language Models |  | 0 |
| Deploying Foundation Model Powered Agent Services: A Survey |  | 0 |
| Dependency-Aware Semi-Structured Sparsity of GLU Variants in Large Language Models |  | 0 |
| Dense Vision Transformer Compression with Few Samples |  | 0 |
| A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks |  | 0 |
| Densely Distilling Cumulative Knowledge for Continual Learning |  | 0 |
| Delving Deep into Semantic Relation Distillation |  | 0 |
| Balancing Specialization, Generalization, and Compression for Detection and Tracking |  | 0 |
| DeGAN : Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier |  | 0 |
| DeepTwist: Learning Model Compression via Occasional Weight Distortion |  | 0 |
| Balancing Cost and Benefit with Tied-Multi Transformers |  | 0 |
Page 26 of 55

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified |
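For context on the models above, DKM (differentiable k-means) compresses weights by clustering them into a small shared codebook; at 2 bits, each weight is replaced by one of 2² = 4 shared values. The sketch below is a hedged illustration of plain (hard) k-means weight sharing, not the differentiable soft-assignment used by DKM itself; `kmeans_quantize` and its defaults are assumptions for illustration.

```python
import torch

def kmeans_quantize(weight: torch.Tensor, bits: int = 2, iters: int = 10):
    """Replace weights with 2**bits shared values via hard k-means.

    A plain k-means illustration of codebook quantization; DKM itself
    uses a differentiable (soft) assignment so the clustering can be
    trained end to end.
    """
    flat = weight.flatten()
    k = 2 ** bits
    # Initialize the codebook evenly across the weight range.
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    for _ in range(iters):
        # Assign every weight to its nearest centroid.
        assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.numel() > 0:
                centroids[j] = members.mean()
    assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
    return centroids[assign].reshape(weight.shape), centroids

# Usage: quantize a random weight matrix to a 4-entry codebook (2 bits).
w = torch.randn(256, 256)
w_q, codebook = kmeans_quantize(w, bits=2)
```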