
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
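To make the three methods named above concrete, here is a minimal NumPy sketch of each. The function names, shapes, and hyperparameters are illustrative assumptions, not taken from any paper listed on this page; production toolkits apply these transformations per layer and typically fine-tune afterwards to recover accuracy.

```python
# Illustrative sketches of parameter pruning, low-rank factorization,
# and weight quantization. Hypothetical helper names; not a reference
# implementation of any specific method.
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Parameter pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Low-rank factorization: approximate W (m x n) as A @ B via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]  # A: m x rank, B: rank x n

def uniform_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Weight quantization: snap weights to 2**bits uniformly spaced levels."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels
    return np.round((w - lo) / scale) * scale + lo  # dequantized for comparison

w = np.random.randn(512, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)  # 90% of entries become zero
a, b = low_rank_factorize(w, rank=32)        # store 32*(512+256) values, not 512*256
w_quant = uniform_quantize(w, bits=2)        # only four distinct weight values
```

Each sketch trades accuracy for storage or compute in a different way: pruning yields sparse matrices, factorization replaces one large matrix multiply with two small ones, and quantization shrinks the bit-width of every weight.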

Papers

Showing 801–810 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| The Effect of Model Compression on Fairness in Facial Expression Recognition | | 0 |
| The Impact of Quantization and Pruning on Deep Reinforcement Learning Models | | 0 |
| The Knowledge Within: Methods for Data-Free Model Compression | | 0 |
| The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve? | | 0 |
| Theoretical Guarantees for Low-Rank Compression of Deep Neural Networks | | 0 |
| The Potential of AutoML for Recommender Systems | | 0 |
| Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method | | 0 |
| Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation | | 0 |
| Time-Correlated Sparsification for Efficient Over-the-Air Model Aggregation in Wireless Federated Learning | | 0 |
| Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |