
Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
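As a rough illustration of the three technique families named above, the NumPy sketch below applies magnitude-based pruning, truncated-SVD low-rank factorization, and uniform weight quantization to a single weight matrix. All function names and hyperparameters here are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def prune_by_magnitude(W, sparsity=0.5):
    """Parameter pruning: zero out the smallest-magnitude weights."""
    k = int(W.size * sparsity)
    threshold = np.sort(np.abs(W), axis=None)[k]
    return np.where(np.abs(W) < threshold, 0.0, W)

def low_rank_factorize(W, rank=16):
    """Low-rank factorization: approximate W as A @ B via truncated SVD.
    Storage drops from m*n values to (m + n) * rank values."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # shape (m, rank)
    B = Vt[:rank, :]             # shape (rank, n)
    return A, B

def quantize_uniform(W, bits=8):
    """Weight quantization: snap weights to 2**bits evenly spaced levels
    (returned dequantized here so the error is easy to measure)."""
    lo, hi = W.min(), W.max()
    scale = (hi - lo) / (2 ** bits - 1)
    return np.round((W - lo) / scale) * scale + lo

W = np.random.randn(256, 128).astype(np.float32)
A, B = low_rank_factorize(W)
print("nonzeros after pruning:", np.count_nonzero(prune_by_magnitude(W)))
print("low-rank rel. error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
print("8-bit rel. error:", np.linalg.norm(W - quantize_uniform(W)) / np.linalg.norm(W))
```

In practice these steps are applied per layer and usually followed by fine-tuning to recover accuracy; the sketch only shows the compression operations themselves.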

Papers

Showing 926–950 of 1356 papers

Title | Status | Hype
One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers |  | 0
One Weight Bitwidth to Rule Them All |  | 0
On Linearizing Structured Data in Encoder-Decoder Language Models: Insights from Text-to-SQL |  | 0
Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision |  | 0
Towards Efficient Full 8-bit Integer DNN Online Training on Resource-limited Devices without Batch Normalization |  | 0
A Model Compression Method with Matrix Product Operators for Speech Enhancement |  | 0
Online Model Compression for Federated Learning with Large Models |  | 0
On Multilingual Encoder Language Model Compression for Low-Resource Languages |  | 0
On the Adversarial Robustness of Quantized Neural Networks |  | 0
On the Compression of Recurrent Neural Networks with an Application to LVCSR acoustic modeling for Embedded Speech Recognition |  | 0
On the Demystification of Knowledge Distillation: A Residual Network Perspective |  | 0
Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework |  | 0
On the Effectiveness of Low-Rank Matrix Factorization for LSTM Model Compression |  | 0
On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild" |  | 0
On the social bias of speech self-supervised models |  | 0
Weight Squeezing: Reparameterization for Knowledge Transfer and Model Compression |  | 0
A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks |  | 0
Optimal Policy Sparsification and Low Rank Decomposition for Deep Reinforcement Learning |  | 0
Optimising TinyML with Quantization and Distillation of Transformer and Mamba Models for Indoor Localisation on Edge Devices |  | 0
Optimization and Scalability of Collaborative Filtering Algorithms in Large Language Models |  | 0
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy |  | 0
Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques |  | 0
Optimizing Singular Spectrum for Large Language Model Compression |  | 0
Optimizing Small Language Models for In-Vehicle Function-Calling |  | 0
Optimizing Traffic Signal Control using High-Dimensional State Representation and Efficient Deep Reinforcement Learning |  | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 |  | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 |  | Unverified
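The DKM entries above refer to differentiable k-means weight clustering, where "2bit-1dim" means each scalar weight (a 1-dimensional block) is mapped to one of 2^2 = 4 shared centroids. DKM itself learns soft, differentiable cluster assignments during training; the minimal NumPy sketch below shows only the simpler hard k-means clustering of an already-trained weight matrix, and every name and default in it is an illustrative assumption, not the DKM implementation.

```python
import numpy as np

def kmeans_cluster_weights(W, bits=2, n_iter=20):
    """Hard k-means weight clustering (illustrative stand-in for DKM):
    replace each scalar weight with one of 2**bits shared centroids."""
    w = W.reshape(-1)
    k = 2 ** bits
    # Initialize centroids at evenly spaced quantiles of the weights.
    centroids = np.quantile(w, np.linspace(0, 1, k))
    for _ in range(n_iter):
        # Each weight joins its nearest centroid ("1dim": per-scalar blocks).
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centroids[c] = w[assign == c].mean()
    return centroids[assign].reshape(W.shape), centroids

W = np.random.randn(512, 512).astype(np.float32)
W_q, centroids = kmeans_cluster_weights(W, bits=2)
print("centroids:", centroids)
print("relative error:", np.linalg.norm(W - W_q) / np.linalg.norm(W))
```

Only the centroid table and the per-weight indices need to be stored, which is what makes a 2-bit (4-centroid) scheme so compact; the accuracy gap between the 2-bit and 1-bit rows reflects how much harder the task becomes with only 2 centroids.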