
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
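
The sketch below, which is not part of the page content, illustrates the three techniques named in the description: magnitude-based parameter pruning, truncated-SVD low-rank factorization, and uniform weight quantization. It is a minimal NumPy example; the function names, defaults, and thresholds are illustrative assumptions, not taken from the cited paper.

```python
# Illustrative sketches of the three compression techniques named above.
# Pure NumPy; function names and parameters are hypothetical.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Parameter pruning: zero the `sparsity` fraction of smallest-magnitude weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, np.float32(0), weights)

def low_rank_factorize(weights, rank):
    """Low-rank factorization: W (m x n) ~= A (m x r) @ B (r x n) via truncated SVD."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def quantize(weights, bits):
    """Weight quantization: uniform affine quantization to 2**bits levels, returned de-quantized."""
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    q = np.round((weights - w_min) / scale)
    return (q * scale + w_min).astype(weights.dtype)

w = np.random.randn(64, 64).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)   # 90% of entries zeroed
a, b = low_rank_factorize(w, rank=8)          # 64*64 -> 2*(64*8) parameters
w_2bit = quantize(w, bits=2)                  # 4 representable values
```

Each function trades accuracy for size in a different way: pruning stores a sparse matrix, factorization stores two thin matrices, and quantization stores low-bit codes plus a scale and offset.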

Papers

Showing 901–910 of 1356 papers

Title | Status | Hype
Analysis of memory consumption by neural networks based on hyperparameters | — | 0
Neural Regularized Domain Adaptation for Chinese Word Segmentation | — | 0
NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing | — | 0
Noisy Neural Network Compression for Analog Storage Devices | — | 0
Understanding the Performance Horizon of the Latest ML Workloads with NonGEMM Workloads | — | 0
Non-Structured DNN Weight Pruning -- Is It Beneficial in Any Platform? | — | 0
Normalized Feature Distillation for Semantic Segmentation | — | 0
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models | — | 0
NurtureNet: A Multi-task Video-based Approach for Newborn Anthropometry | — | 0
NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | — | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | — | Unverified
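
DKM in the rows above refers to differentiable k-means weight clustering, and "2bit-1dim" means each scalar weight is mapped to one of 2² = 4 shared centroid values. The sketch below shows only the plain, hard-assignment 1-D k-means idea under that reading; the actual DKM method makes the assignment differentiable so the centroids can be trained end to end, and the function name here is hypothetical.

```python
# Hedged sketch: hard 1-D k-means weight clustering, the idea underlying
# "2bit-1dim" compression (2**2 = 4 shared weight values). The real DKM
# layer uses a differentiable (soft) assignment; this version is for
# illustration only.
import numpy as np

def kmeans_cluster_weights(weights, bits, iters=25):
    """Replace each scalar weight with the nearest of 2**bits learned centroids."""
    flat = weights.ravel()
    k = 2 ** bits
    centroids = np.linspace(flat.min(), flat.max(), k)  # even init over the range
    for _ in range(iters):
        # Hard assignment: index of the nearest centroid for each weight.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    # Storage becomes `bits` per weight (the codes) plus k centroid floats.
    return centroids[assign].reshape(weights.shape).astype(weights.dtype), assign

w = np.random.randn(128, 128).astype(np.float32)
w_2bit, codes = kmeans_cluster_weights(w, bits=2)  # 4 shared values, 2-bit codes
```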