SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power, resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
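The three families of techniques named above can be sketched in a few lines each. This is a minimal NumPy illustration on a toy weight matrix; the function names and parameter choices are assumptions for illustration, not taken from any paper listed below:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Unstructured parameter pruning: zero out the smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear weight quantization to int8; returns codes and a dequantized copy."""
    max_abs = np.abs(weights).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    codes = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return codes, codes.astype(np.float32) * scale

def low_rank_factorize(weights, rank):
    """Low-rank factorization via truncated SVD: W is approximated by A @ B."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.9)   # 90% of entries become zero
codes, dequantized = quantize_int8(w)       # 4x smaller storage than float32
a, b = low_rank_factorize(w, rank=8)        # 2*64*8 floats instead of 64*64
```

In practice these methods are usually combined with fine-tuning or distillation to recover the accuracy lost by compression, which is what many of the papers below study.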

Papers

Showing 601–650 of 1356 papers

Title | Status | Hype
On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild" | - | 0
VIC-KD: Variance-Invariance-Covariance Knowledge Distillation to Make Keyword Spotting More Robust Against Adversarial Attacks | - | 0
Pruning Large Language Models via Accuracy Predictor | - | 0
Training dynamic models using early exits for automatic speech recognition on resource-constrained devices | Code | 0
Two-Step Knowledge Distillation for Tiny Speech Enhancement | - | 0
CoLLD: Contrastive Layer-to-layer Distillation for Compressing Multilingual Pre-trained Speech Encoders | - | 0
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization | - | 0
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models | - | 0
Compressing Vision Transformers for Low-Resource Visual Learning | Code | 0
ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency Transformation | - | 0
Uncovering the Hidden Cost of Model Compression | Code | 0
Computation-efficient Deep Learning for Computer Vision: A Survey | - | 0
Improving Knowledge Distillation for BERT Models: Loss Functions, Mapping Methods, and Weight Tuning | - | 0
DLIP: Distilling Language-Image Pre-training | - | 0
QD-BEV: Quantization-aware View-guided Distillation for Multi-view 3D Object Detection | - | 0
Learning Disentangled Representation with Mutual Information Maximization for Real-Time UAV Tracking | - | 0
SHARK: A Lightweight Model Compression Approach for Large-scale Recommender Systems | - | 0
Spike-and-slab shrinkage priors for structurally sparse Bayesian neural networks | - | 0
Benchmarking Adversarial Robustness of Compressed Deep Learning Models | - | 0
Shortcut-V2V: Compression Framework for Video-to-Video Translation based on Temporal Redundancy Reduction | - | 0
A Survey on Model Compression for Large Language Models | - | 0
FedEdge AI-TC: A Semi-supervised Traffic Classification Method based on Trusted Federated Deep Learning for Mobile Edge Computing | - | 0
Resource Constrained Model Compression via Minimax Optimization for Spiking Neural Networks | Code | 0
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization | - | 0
MIMONet: Multi-Input Multi-Output On-Device Deep Learning | - | 0
Model Compression Methods for YOLOv5: A Review | - | 0
Impact of Disentanglement on Pruning Neural Networks | - | 0
Knowledge Distillation for Object Detection: from generic to remote sensing datasets | - | 0
CA-LoRA: Adapting Existing LoRA for Compressed LLMs to Enable Efficient Multi-Tasking on Personal Devices | Code | 0
Distilling Universal and Joint Knowledge for Cross-Domain Model Compression on Time Series Data | Code | 0
Distilled Pruning: Using Synthetic Data to Win the Lottery | Code | 0
TensorGPT: Efficient Compression of Large Language Models based on Tensor-Train Decomposition | - | 0
Data-Free Quantization via Mixed-Precision Compensation without Fine-Tuning | - | 0
An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs | - | 0
Low-Rank Prune-And-Factorize for Language Model Compression | - | 0
Feature Adversarial Distillation for Point Cloud Classification | - | 0
Partitioning-Guided K-Means: Extreme Empty Cluster Resolution for Extreme Model Compression | - | 0
Data-Free Backbone Fine-Tuning for Pruned Neural Networks | Code | 0
DynaQuant: Compressing Deep Learning Training Checkpoints via Dynamic Quantization | - | 0
LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation | - | 0
Neural Network Compression using Binarization and Few Full-Precision Weights | - | 0
Deep Model Compression Also Helps Models Capture Ambiguity | Code | 0
A Brief Review of Hypernetworks in Deep Learning | Code | 0
Riemannian Low-Rank Model Compression for Federated Learning with Over-the-Air Aggregation | - | 0
Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference | - | 0
Low-Complexity Acoustic Scene Classification Using Data Augmentation and Lightweight ResNet | - | 0
Group channel pruning and spatial attention distilling for object detection | - | 0
Task-Agnostic Structured Pruning of Speech Representation Models | - | 0
ConaCLIP: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval | - | 0
2-bit Conformer quantization for automatic speech recognition | - | 0
Page 13 of 28

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified
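For context on the DKM rows: a "2bit-1dim" configuration stores each scalar weight as an index into a 2²-entry codebook learned by k-means-style clustering. DKM itself makes the cluster assignment differentiable so it can be trained end-to-end with the task loss; the hard-assignment sketch below is a simplified stand-in for illustration, not the method from the paper:

```python
import numpy as np

def kmeans_cluster_weights(weights, bits=2, iters=20, seed=0):
    """Cluster scalar weights into 2**bits centroids (1D Lloyd's k-means).
    Each weight is then stored as a small integer index plus a shared codebook."""
    k = 2 ** bits
    flat = weights.ravel()
    rng = np.random.default_rng(seed)
    centroids = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recompute centroids.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return assign.reshape(weights.shape).astype(np.uint8), centroids

rng = np.random.default_rng(1)
w = rng.standard_normal((32, 32)).astype(np.float32)
idx, codebook = kmeans_cluster_weights(w, bits=2)
w_hat = codebook[idx]  # dequantized weights
```

Storing `idx` (2 bits per weight, widened to uint8 here for simplicity) plus the 4-float codebook in place of float32 weights gives roughly a 16x size reduction, which matches the bit-width cited in the table.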