
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
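To make two of the techniques named above concrete, here is a minimal sketch of magnitude pruning and uniform weight quantization, assuming NumPy; the function names and the sparsity/bit-width settings are illustrative and not taken from any paper listed below.

```python
# Hedged sketch: magnitude pruning and uniform ("fake") quantization of a
# weight matrix. Parameters (sparsity=0.9, bits=4) are arbitrary examples.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Round weights onto a uniform grid of 2**bits levels."""
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    return np.round((weights - w_min) / scale) * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)  # keep only the largest 10%
w_quant = uniform_quantize(w, bits=4)        # at most 16 distinct values
print(f"nonzero after pruning: {np.count_nonzero(w_sparse) / w.size:.2%}")
print(f"unique values after 4-bit quantization: {np.unique(w_quant).size}")
```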

Papers

Showing 571–580 of 1356 papers

Title | Status | Hype
HCE: Improving Performance and Efficiency with Heterogeneously Compressed Neural Network Ensemble | | 0
FSCNN: A Fast Sparse Convolution Neural Network Inference System | | 0
Frustratingly Easy Model Ensemble for Abstractive Summarization | | 0
From Word Vectors to Multimodal Embeddings: Techniques, Applications, and Future Directions For Large Language Models | | 0
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs | | 0
Full-Cycle Energy Consumption Benchmark for Low-Carbon Computer Vision | | 0
Conditional Teacher-Student Learning | | 0
Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach | | 0
Conditional Generative Data-free Knowledge Distillation | | 0
From Cloud to Edge: Rethinking Generative AI for Low-Resource Design Challenges | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
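The DKM entries above refer to differentiable k-means weight clustering (palettization), where each scalar weight is mapped to one of 2^b shared centroids, so "2bit-1dim" denotes four centroids per weight table. Below is a minimal, non-differentiable k-means sketch of that idea, assuming NumPy; DKM itself uses a differentiable, attention-based soft assignment during training, which this sketch omits.

```python
# Hedged sketch of k-means weight clustering (palettization), the idea
# underlying the DKM rows above; hard assignment stands in for DKM's
# differentiable soft assignment.
import numpy as np

def kmeans_cluster_weights(weights: np.ndarray, bits: int, iters: int = 20):
    """Cluster scalar weights into 2**bits centroids (1-dim palettization)."""
    flat = weights.ravel()
    k = 2 ** bits
    # Initialize centroids evenly across the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids, assign.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
centroids, codes = kmeans_cluster_weights(w, bits=2)  # "2bit-1dim": 4 centroids
w_compressed = centroids[codes]  # reconstruct weights from the 2-bit codebook
print(f"{centroids.size} centroids, reconstruction MSE: "
      f"{((w - w_compressed) ** 2).mean():.4f}")
```

Storing 2-bit codes plus a tiny codebook in place of 32-bit floats is what yields roughly 16x weight compression, at the accuracy cost visible in the Claimed column when moving from 2-bit to 1-bit.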