Model Compression

Model compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
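As a rough illustration of two of the techniques named in the description above, the sketch below applies magnitude-based parameter pruning followed by uniform weight quantization to a random weight matrix. The function names, sparsity level, and bit width are illustrative assumptions, not taken from any particular paper listed here.

```python
# Minimal, framework-free sketch of two model compression techniques:
# magnitude-based parameter pruning and uniform weight quantization.
# All names and hyperparameters (sparsity, bit width) are illustrative.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantize(weights: np.ndarray, bits: int = 8) -> np.ndarray:
    """Quantize weights to 2**bits evenly spaced levels, then dequantize."""
    levels = 2 ** bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / levels
    q = np.round((weights - w_min) / scale)
    return q * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)   # keep the largest 10% of weights
w_quant = uniform_quantize(w_pruned, bits=4)  # 4-bit uniform quantization
print(f"nonzero after pruning: {np.count_nonzero(w_pruned) / w.size:.2%}")
```

In practice the two steps compound: pruning yields a sparse matrix that can be stored in compressed formats, and quantization shrinks the remaining weights from 32-bit floats to a few bits each.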

Papers

Showing 681–690 of 1356 papers

Title | Status | Hype
Rank-Based Filter Pruning for Real-Time UAV Tracking | | 0
RankDVQA-mini: Knowledge Distillation-Driven Deep Video Quality Assessment | | 0
Rapid Deployment of DNNs for Edge Computing via Structured Pruning at Initialization | | 0
Rate Distortion For Model Compression: From Theory To Practice | | 0
Experimental implementation of a neural network optical channel equalizer in restricted hardware using pruning and quantization | | 0
Real time backbone for semantic segmentation | | 0
Membership Privacy for Machine Learning Models Through Knowledge Transfer | | 0
Rectifying the Data Bias in Knowledge Distillation | | 0
Recurrent Convolution for Compact and Cost-Adjustable Neural Networks: An Empirical Study | | 0
Recurrent Convolutions: A Model Compression Point of View | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified