SOTAVerified

Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power and resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the proposed methods for reducing the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
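As a rough illustration of the three techniques named above, the sketch below applies magnitude pruning, 8-bit uniform weight quantization, and rank-r SVD factorization to a single PyTorch linear layer. The layer size, 50% sparsity, 8-bit width, and rank 16 are illustrative assumptions, not settings taken from any paper listed on this page.

```python
# Illustrative sketch only: parameter pruning, weight quantization, and
# low-rank factorization applied to one linear layer's weight matrix.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(128, 64)
w = layer.weight.data.clone()  # keep the dense original for comparison

# 1) Parameter pruning: zero the 50% of weights with the smallest magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")  # bake the sparsity into the weight tensor
sparsity = (layer.weight == 0).float().mean().item()

# 2) Weight quantization: symmetric per-tensor 8-bit quantization.
scale = w.abs().max() / 127.0
w_int8 = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
w_dequant = w_int8.float() * scale  # values actually used at inference

# 3) Low-rank factorization: keep only the top-r singular components.
U, S, Vh = torch.linalg.svd(w, full_matrices=False)
r = 16  # assumed target rank
w_lowrank = (U[:, :r] * S[:r]) @ Vh[:r, :]

print(f"pruned sparsity: {sparsity:.0%}, "
      f"max quantization error: {(w - w_dequant).abs().max().item():.4f}, "
      f"relative rank-{r} error: {((w - w_lowrank).norm() / w.norm()).item():.4f}")
```

In practice these methods are applied across a whole network rather than a single layer, and are usually followed by fine-tuning or calibration to recover accuracy.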

Papers

Showing 151-200 of 1356 papers

Title | Status | Hype
Activation Sparsity Opportunities for Compressing General Large Language Models | - | 0
Can Students Beyond The Teacher? Distilling Knowledge from Teacher's Bias | - | 0
Optimising TinyML with Quantization and Distillation of Transformer and Mamba Models for Indoor Localisation on Edge Devices | - | 0
Low-Rank Correction for Quantized LLMs | - | 0
VQ4ALL: Efficient Neural Network Representation via a Universal Codebook | - | 0
Compression for Better: A General and Stable Lossless Compression Framework | - | 0
Lossless Model Compression via Joint Low-Rank Factorization Optimization | - | 0
Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search | - | 0
CPTQuant -- A Novel Mixed Precision Post-Training Quantization Techniques for Large Language Models | - | 0
Efficient Model Compression Techniques with FishLeg | - | 0
Individual Content and Motion Dynamics Preserved Pruning for Video Diffusion Models | - | 0
Faithful Label-free Knowledge Distillation | Code | 0
Efficient Pruning of Text-to-Image Models: Insights from Pruning Stable Diffusion | - | 0
TaQ-DiT: Time-aware Quantization for Diffusion Transformers | - | 0
FASTNav: Fine-tuned Adaptive Small-language-models Trained for Multi-point Robot Navigation | - | 0
What Makes a Good Dataset for Knowledge Distillation? | - | 0
Puppet-CNN: Input-Adaptive Convolutional Neural Networks with Model Compression using Ordinary Differential Equation | - | 0
Bridging the Resource Gap: Deploying Advanced Imitation Learning Models onto Affordable Embedded Platforms | - | 0
An exploration of the effect of quantisation on energy consumption and inference time of StarCoder2 | Code | 0
Re-Parameterization of Lightweight Transformer for On-Device Speech Emotion Recognition | - | 0
Feature Interaction Fusion Self-Distillation Network For CTR Prediction | - | 0
OWLed: Outlier-weighed Layerwise Pruning for Efficient Autonomous Driving Framework | Code | 0
ASER: Activation Smoothing and Error Reconstruction for Large Language Model Quantization | - | 0
Optimizing Traffic Signal Control using High-Dimensional State Representation and Efficient Deep Reinforcement Learning | - | 0
ZipNN: Lossless Compression for AI Models | Code | 3
From Word Vectors to Multimodal Embeddings: Techniques, Applications, and Future Directions For Large Language Models | - | 0
Change Is the Only Constant: Dynamic LLM Slicing based on Layer Redundancy | Code | 0
Efficient Model Compression for Bayesian Neural Networks | - | 0
ML Research Benchmark | Code | 0
LLMCBench: Benchmarking Large Language Model Compression for Efficient Deployment | Code | 1
EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation | - | 0
A Survey of Small Language Models | - | 0
SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models | - | 0
Beware of Calibration Data for Pruning Large Language Models | - | 0
Towards Effective Data-Free Knowledge Distillation via Diverse Diffusion Augmentation | Code | 0
Self-calibration for Language Model Quantization and Pruning | - | 0
Identifying Sub-networks in Neural Networks via Functionally Similar Representations | - | 0
EvoPress: Towards Optimal Dynamic Model Compression via Evolutionary Search | Code | 1
Preview-based Category Contrastive Learning for Knowledge Distillation | - | 0
QIANets: Quantum-Integrated Adaptive Networks for Reduced Latency and Improved Inference Times in CNN Models | Code | 0
SLiM: One-shot Quantization and Sparsity with Low-rank Approximation for LLM Weight Compression | Code | 1
What is Left After Distillation? How Knowledge Transfer Impacts Fairness and Bias | - | 0
CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression | - | 0
Large Language Model Compression with Neural Architecture Search | - | 0
QT-DoG: Quantization-aware Training for Domain Generalization | Code | 1
SpaLLM: Unified Compressive Adaptation of Large Language Models with Sketching | - | 0
ESPACE: Dimensionality Reduction of Activations for Model Compression | - | 0
Continuous Approximations for Improving Quantization Aware Training of LLMs | - | 0
Geometry is All You Need: A Unified Taxonomy of Matrix and Tensor Factorization for Compression of Generative Language Models | - | 0
Basis Sharing: Cross-Layer Parameter Sharing for Large Language Model Compression | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | - | Unverified
2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | - | Unverified