SOTAVerified

Model Compression

Model Compression has been an actively pursued area of research in recent years, with the goal of deploying state-of-the-art deep networks on low-power and resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are among the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
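Two of the techniques named above, parameter pruning and weight quantization, can be sketched in a few lines of NumPy. This is a minimal illustration of the ideas (unstructured magnitude pruning and uniform "fake" quantization), not any particular paper's method; the function names and defaults are our own.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def uniform_quantize(weights, bits=8):
    """Uniform affine quantization to 2**bits levels, then dequantization
    ("fake quantization"), so the rounding error is visible in float form."""
    lo, hi = weights.min(), weights.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((weights - lo) / scale)
    return q * scale + lo
```

With `sparsity=0.5` half of the weights are set to zero (enabling sparse storage), and with `bits=2` every weight is snapped to one of at most four values (enabling 2-bit encoding of the tensor).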

Papers

Showing 851–875 of 1356 papers

Title | Status | Hype
Unraveling Key Factors of Knowledge Distillation | | 0
Unsupervised model compression for multilayer bootstrap networks | | 0
UPAQ: A Framework for Real-Time and Energy-Efficient 3D Object Detection in Autonomous Vehicles | | 0
USDC: Unified Static and Dynamic Compression for Visual Transformer | | 0
USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models | | 0
Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training | | 0
Variational autoencoder-based neural network model compression | | 0
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks | | 0
Vision Foundation Models in Medical Image Analysis: Advances and Challenges | | 0
Vision-Language Models for Edge Networks: A Comprehensive Survey | | 0
Vision Transformers on the Edge: A Comprehensive Survey of Model Compression and Acceleration Strategies | | 0
VQ4ALL: Efficient Neural Network Representation via a Universal Codebook | | 0
Wasserstein Contrastive Representation Distillation | | 0
Watermarking Graph Neural Networks by Random Graphs | | 0
WeClick: Weakly-Supervised Video Semantic Segmentation with Click Annotations | | 0
Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators | | 0
Weight Normalization based Quantization for Deep Neural Network Compression | | 0
Weight Squeezing: Reparameterization for Knowledge Transfer and Model Compression | | 0
Weight Squeezing: Reparameterization for Compression and Fast Inference | | 0
Robustness Challenges in Model Distillation and Pruning for Natural Language Understanding | | 0
What do larger image classifiers memorise? | | 0
What is Left After Distillation? How Knowledge Transfer Impacts Fairness and Bias | | 0
What is Lost in Knowledge Distillation? | | 0
What Makes a Good Dataset for Knowledge Distillation? | | 0
Page 35 of 55

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified
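The DKM entries above refer to codebook-based weight compression: a "2bit-1dim" configuration clusters scalar weights into a 4-entry codebook so each weight can be stored as a 2-bit index. The sketch below shows the underlying idea with plain hard k-means in NumPy; DKM itself uses a differentiable, attention-based cluster assignment during training, so this is an illustrative stand-in, not the paper's algorithm, and all names here are our own.

```python
import numpy as np

def kmeans_cluster_weights(weights, n_clusters=4, n_iters=20, seed=0):
    """Compress weights by clustering them into a small codebook and
    replacing each weight with its nearest centroid.
    n_clusters=4 corresponds to a 2-bit codebook over scalar (1-dim) weights."""
    rng = np.random.default_rng(seed)
    flat = weights.ravel()
    # Initialize centroids from distinct weight values
    centroids = rng.choice(flat, size=n_clusters, replace=False)
    for _ in range(n_iters):
        # Hard assignment: nearest centroid per weight
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Update each centroid to the mean of its assigned weights
        for c in range(n_clusters):
            members = flat[assign == c]
            if members.size:
                centroids[c] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return centroids[assign].reshape(weights.shape), centroids
```

After clustering, only the codebook (a few floats) and the per-weight indices (2 bits each for 4 clusters) need to be stored, which is the source of the compression ratio.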