
Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to compress deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
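The three techniques named above can each be captured in a few lines. Below is a minimal NumPy sketch of magnitude pruning, low-rank factorization, and uniform weight quantization applied to a single dense layer; all shapes, sparsity levels, bit-widths, and ranks are illustrative assumptions, not settings from any of the listed papers.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Parameter pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Low-rank factorization: approximate w with two thin factors via SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]  # store these two factors instead of w

def uniform_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Weight quantization: round weights to 2**bits evenly spaced levels."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / (2**bits - 1)
    return np.round((w - w_min) / scale) * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # a toy dense layer

w_pruned = magnitude_prune(w, sparsity=0.9)  # keep only the largest 10% of weights
a, b = low_rank_factorize(w, rank=32)        # 256*256 params -> 2*(256*32) params
w_quant = uniform_quantize(w, bits=4)        # at most 16 distinct weight values

print(f"pruned density:  {np.count_nonzero(w_pruned) / w.size:.1%}")
print(f"low-rank params: {a.size + b.size} vs {w.size}")
print(f"quant levels:    {len(np.unique(w_quant))}")
```

Note that a random Gaussian matrix is a worst case for the low-rank step; real trained layers typically have faster-decaying singular values and so compress much better at the same rank.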

Papers

Showing 901–950 of 1356 papers

Title | Status | Hype
YANMTT: Yet Another Neural Machine Translation Toolkit | - | 0
Small Object Detection Based on Modified FSSD and Model Compression | - | 0
Scaling Laws for Deep Learning | - | 0
Pruning vs XNOR-Net: A Comprehensive Study of Deep Learning for Audio Classification on Edge-devices | Code | 0
Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data | - | 0
Visual Domain Adaptation for Monocular Depth Estimation on Resource-Constrained Hardware | Code | 0
Random Offset Block Embedding Array (ROBE) for CriteoTB Benchmark MLPerf DLRM Model: 1000× Compression and 3.1× Faster Inference | - | 0
Learning a Neural Diff for Speech Models | - | 0
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning | - | 0
Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework | - | 0
Pruning Ternary Quantization | - | 0
Accelerating deep neural networks for efficient scene understanding in automotive cyber-physical systems | - | 0
A New Clustering-Based Technique for the Acceleration of Deep Convolutional Networks | - | 0
Federated Action Recognition on Heterogeneous Embedded Devices | - | 0
Efficient automated U-Net based tree crown delineation using UAV multi-spectral imagery on embedded devices | - | 0
Compact and Optimal Deep Learning with Recurrent Parameter Generators | Code | 0
Model compression as constrained optimization, with application to neural nets. Part V: combining compressions | - | 0
WeClick: Weakly-Supervised Video Semantic Segmentation with Click Annotations | - | 0
Universal approximation and model compression for radial neural networks | Code | 0
A Light-weight Deep Human Activity Recognition Algorithm Using Multi-knowledge Distillation | - | 0
Investigation of Practical Aspects of Single Channel Speech Separation for ASR | - | 0
A Lottery Ticket Hypothesis Framework for Low-Complexity Device-Robust Neural Acoustic Scene Classification | - | 0
Pool of Experts: Realtime Querying Specialized Knowledge in Massive Neural Networks | Code | 0
Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations | Code | 0
Image Classification with CondenseNeXt for ARM-Based Computing Platforms | Code | 0
Scalable Teacher Forcing Network for Semi-Supervised Large Scale Data Streams | - | 0
PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation | - | 0
Minimally Invasive Surgery for Sparse Neural Networks in Contrastive Manner | - | 0
Network Pruning via Performance Maximization | Code | 0
Data-Free Knowledge Distillation for Image Super-Resolution | Code | 0
Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration | Code | 0
How does topology of neural architectures impact gradient propagation and model performance? | Code | 0
Topology Distillation for Recommender System | - | 0
Masked Training of Neural Networks with Partial Gradients | - | 0
Energy-efficient Knowledge Distillation for Spiking Neural Networks | - | 0
Heterogeneous Federated Learning using Dynamic Model Pruning and Adaptive Gradient | - | 0
FedNILM: Applying Federated Learning to NILM Applications at the Edge | - | 0
FedNL: Making Newton-Type Methods Applicable to Federated Learning | - | 0
Feature Flow Regularization: Improving Structured Sparsity in Deep Neural Networks | - | 0
One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers | - | 0
Energy-Efficient Model Compression and Splitting for Collaborative Inference Over Time-Varying Channels | - | 0
On Attention Redundancy: A Comprehensive Study | - | 0
NAS-BERT: Task-Agnostic and Adaptive-Size BERT Compression with Neural Architecture Search | - | 0
Towards Efficient Full 8-bit Integer DNN Online Training on Resource-limited Devices without Batch Normalization | - | 0
Differentiable Sparsification for Deep Neural Networks | - | 0
Model Compression | - | 0
How to Explain Neural Networks: an Approximation Perspective | - | 0
3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration | - | 0
Test-Time Adaptation Toward Personalized Speech Enhancement: Zero-Shot Learning with Knowledge Distillation | - | 0
Neural 3D Scene Compression via Model Compression | - | 0
Page 19 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified
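The DKM entries above refer to differentiable k-means clustering of weights, where each weight is replaced by one of a small set of shared codebook values; on a reasonable reading, "2bit-1dim" denotes a 4-entry codebook over scalar weights. The sketch below implements only the hard k-means codebook step that DKM relaxes into a differentiable (attention-based) assignment during training; function names and settings here are illustrative, not from the DKM paper.

```python
import numpy as np

def kmeans_cluster_weights(w: np.ndarray, bits: int = 2, iters: int = 20):
    """Replace each scalar weight with the nearest of 2**bits shared centroids,
    so only the small codebook plus per-weight indices need to be stored."""
    flat = w.reshape(-1)
    k = 2**bits  # "2bit-1dim" -> a 4-entry codebook over scalar weights
    # initialize centroids at evenly spaced quantiles of the weight distribution
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # hard assignment: index of the nearest centroid for every weight
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):  # recompute each centroid as the mean of its members
            if np.any(assign == j):
                centroids[j] = flat[assign == j].mean()
    return centroids[assign].reshape(w.shape), centroids

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128)).astype(np.float32)  # a toy weight matrix
w_compressed, codebook = kmeans_cluster_weights(w, bits=2)
print("codebook:", np.round(codebook, 3))  # 4 shared values replace 16384 floats
```

At 2 bits the stored indices cost 2 bits per weight instead of 32, which is the roughly 16× compression these codebook methods target; the accuracy gap between the 2-bit and 1-bit rows above shows how quickly a codebook that small becomes the bottleneck.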