SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
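
The three techniques named in the description can be illustrated on a single dense weight matrix. The following is a minimal sketch, not drawn from any of the listed papers; the matrix shape, sparsity level, rank, and bit width are arbitrary choices for illustration.

```python
# Illustrative sketch of three model compression techniques applied to
# one dense layer's weights. All parameters below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128)).astype(np.float32)  # a dense layer's weights

# 1. Parameter pruning: zero out the 90% of weights smallest in magnitude.
sparsity = 0.9
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 2. Low-rank factorization: approximate W with a rank-r product U @ V,
#    storing r*(m+n) values instead of m*n.
r = 16
U_full, S, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * S[:r]   # (256, 16), singular values folded into U
V = Vt[:r, :]               # (16, 128)
W_lowrank = U @ V

# 3. Weight quantization: uniform 8-bit quantization of the weight range.
bits = 8
scale = (W.max() - W.min()) / (2**bits - 1)
W_q = np.round((W - W.min()) / scale).astype(np.uint8)   # stored form
W_dequant = W_q.astype(np.float32) * scale + W.min()     # used at inference

for name, approx in [("pruning", W_pruned),
                     ("low-rank", W_lowrank),
                     ("quantization", W_dequant)]:
    err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    print(f"{name:>12}: relative reconstruction error {err:.3f}")
```

Each technique trades reconstruction error for storage: pruning keeps only the surviving weights, factorization stores r*(m+n) values instead of m*n, and quantization stores one byte per weight plus a scale and offset.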

Papers

Showing 931–940 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Model Compression Method with Matrix Product Operators for Speech Enhancement | | 0 |
| Online Model Compression for Federated Learning with Large Models | | 0 |
| On Multilingual Encoder Language Model Compression for Low-Resource Languages | | 0 |
| On the Adversarial Robustness of Quantized Neural Networks | | 0 |
| On the Compression of Recurrent Neural Networks with an Application to LVCSR acoustic modeling for Embedded Speech Recognition | | 0 |
| On the Demystification of Knowledge Distillation: A Residual Network Perspective | | 0 |
| Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework | | 0 |
| On the Effectiveness of Low-Rank Matrix Factorization for LSTM Model Compression | | 0 |
| On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild" | | 0 |
| On the social bias of speech self-supervised models | | 0 |
Page 94 of 136

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |
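
For context on the rows above: DKM (differentiable k-means) compresses weights by clustering them onto a small shared palette, so "2bit-1dim" means each scalar weight is replaced by an index into 2^2 = 4 centroids. DKM itself learns the clustering differentiably during training; the sketch below is only plain k-means palettization, included to make the bit/dimension naming concrete, and the function name and parameters are illustrative assumptions rather than the DKM algorithm.

```python
# Plain k-means weight clustering ("palettization"): each scalar weight
# is stored as a small integer index into 2**bits shared centroids.
# This is an illustration of the idea, not the DKM training procedure.
import numpy as np

def kmeans_palettize(w, bits=2, iters=25, seed=0):
    """Cluster scalar weights into 2**bits centroids; illustrative helper."""
    k = 2 ** bits
    rng = np.random.default_rng(seed)
    flat = w.ravel()
    centroids = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Recompute each centroid as the mean of its assigned weights.
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    return assign.reshape(w.shape).astype(np.uint8), centroids

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64)).astype(np.float32)
assign, centroids = kmeans_palettize(W, bits=2)   # 2-bit, scalar (1-dim) weights
W_compressed = centroids[assign]                  # dequantized weights
err = np.linalg.norm(W - W_compressed) / np.linalg.norm(W)
print(f"4 centroids, relative reconstruction error {err:.3f}")
```

The accuracy gap between the 2-bit and 1-bit rows reflects the usual trade-off: halving the palette to 2 centroids shrinks storage further but loses much more weight precision.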