
Model Compression

Model compression has been an actively pursued research area over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
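To make the three techniques named above concrete, here is a minimal NumPy sketch of each applied to a single weight matrix. The function names, shapes, and hyperparameters are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.9):
    """Parameter pruning: zero out the smallest-magnitude weights."""
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

def low_rank_factorize(w, rank=16):
    """Low-rank factorization: approximate w (m x n) by two thin
    matrices via truncated SVD, storing m*rank + rank*n values."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # m x rank
    b = vt[:rank, :]             # rank x n
    return a, b

def quantize_int8(w):
    """Weight quantization: symmetric linear quantization to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale              # dequantize with q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128)).astype(np.float32)

pruned = magnitude_prune(w)
a, b = low_rank_factorize(w)
q, scale = quantize_int8(w)

print("pruned nonzeros:", np.count_nonzero(pruned), "of", w.size)
print("low-rank rel. error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
print("int8 rel. error:", np.linalg.norm(w - q * scale) / np.linalg.norm(w))
```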

Papers

Showing 1051–1100 of 1356 papers

Title | Hype
Neural Architecture Codesign for Fast Bragg Peak Analysis | 0
Neural Network Compression for Noisy Storage Devices | 0
Neural Network Compression using Binarization and Few Full-Precision Weights | 0
Neural Network Compression Via Sparse Optimization | 0
Neural Network Pruning by Cooperative Coevolution | 0
Neural Regularized Domain Adaptation for Chinese Word Segmentation | 0
NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing | 0
Noisy Neural Network Compression for Analog Storage Devices | 0
NonGEMM Bench: Understanding the Performance Horizon of the Latest ML Workloads with NonGEMM Workloads | 0
Non-Structured DNN Weight Pruning -- Is It Beneficial in Any Platform? | 0
Normalized Feature Distillation for Semantic Segmentation | 0
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models | 0
NurtureNet: A Multi-task Video-based Approach for Newborn Anthropometry | 0
NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models | 0
NVRC: Neural Video Representation Compression | 0
oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes | 0
On Accelerating Edge AI: Optimizing Resource-Constrained Environments | 0
On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence | 0
Data-Independent Neural Pruning via Coresets | 0
On Attention Redundancy: A Comprehensive Study | 0
Onboard Optimization and Learning: A Survey | 0
Once-Tuning-Multiple-Variants: Tuning Once and Expanded as Multiple Vision-Language Model Variants | 0
On-Device Document Classification using multimodal features | 0
On-Device Qwen2.5: Efficient LLM Inference with Model Compression and Hardware Acceleration | 0
One-Shot Model for Mixed-Precision Quantization | 0
One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers | 0
One Weight Bitwidth to Rule Them All | 0
On Linearizing Structured Data in Encoder-Decoder Language Models: Insights from Text-to-SQL | 0
Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision | 0
Online Model Compression for Federated Learning with Large Models | 0
On Multilingual Encoder Language Model Compression for Low-Resource Languages | 0
On the Adversarial Robustness of Quantized Neural Networks | 0
On the Compression of Recurrent Neural Networks with an Application to LVCSR acoustic modeling for Embedded Speech Recognition | 0
On the Demystification of Knowledge Distillation: A Residual Network Perspective | 0
On the Effectiveness of Low-Rank Matrix Factorization for LSTM Model Compression | 0
On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild" | 0
On the social bias of speech self-supervised models | 0
Optimal Policy Sparsification and Low Rank Decomposition for Deep Reinforcement Learning | 0
Optimising TinyML with Quantization and Distillation of Transformer and Mamba Models for Indoor Localisation on Edge Devices | 0
Optimization and Scalability of Collaborative Filtering Algorithms in Large Language Models | 0
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy | 0
Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques | 0
Optimizing Singular Spectrum for Large Language Model Compression | 0
Optimizing Small Language Models for In-Vehicle Function-Calling | 0
Optimizing Traffic Signal Control using High-Dimensional State Representation and Efficient Deep Reinforcement Learning | 0
OPTISHEAR: Towards Efficient and Adaptive Pruning of Large Language Models via Evolutionary Optimization | 0
Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models | 0
OTOV2: Automatic, Generic, User-Friendly | 0
Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling | 0
Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network | 0
Page 22 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified
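The DKM rows above refer to compressing weights by clustering them to a small codebook: 2-bit, 1-dimensional clustering leaves four scalar centroids per weight tensor, so each weight is stored as a 2-bit code plus a shared codebook. As a rough illustration of the idea only, here is a hard k-means codebook sketch; DKM itself uses a differentiable soft assignment during training, and everything below is an assumption for exposition.

```python
import numpy as np

def kmeans_codebook(w, bits=2, iters=20):
    """Cluster flattened weights into 2**bits scalar centroids (hard
    k-means); each weight is then stored as a `bits`-wide code."""
    k = 2 ** bits
    flat = w.reshape(-1)
    # initialize centroids at evenly spaced quantiles of the weights
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:                 # skip empty clusters
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids, assign.reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 128)).astype(np.float32)
centroids, codes = kmeans_codebook(w, bits=2)
w_hat = centroids[codes]                     # reconstructed weights
print("relative error:", np.linalg.norm(w - w_hat) / np.linalg.norm(w))
```

At 1 bit (two centroids), far more reconstruction error is unavoidable, which is consistent with the large claimed accuracy gap between the 2-bit and 1-bit rows.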