Model Compression

Model compression has been an actively pursued area of research over the last few years, with the goal of deploying state-of-the-art deep networks on low-power and resource-limited devices without a significant drop in accuracy. Parameter pruning, low-rank factorization, and weight quantization are some of the methods proposed to reduce the size of deep networks.

Source: KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow
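
The three techniques named above each trade a small amount of accuracy for a smaller or cheaper model. As a rough illustration, the NumPy sketch below shows minimal versions of all three applied to a single weight matrix. The function names, sparsity level, rank, and bit-width are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Parameter pruning: zero the `sparsity` fraction of smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def low_rank_factorize(w: np.ndarray, rank: int):
    """Low-rank factorization: approximate W with two thin factors via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]  # W is approximately A @ B

def uniform_quantize(w: np.ndarray, num_bits: int = 8):
    """Weight quantization: map floats to 2**num_bits integer levels plus scale/offset."""
    levels = 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / levels
    q = np.round((w - w_min) / scale).astype(np.int32)
    return q, scale, w_min  # dequantize with q * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

w_sparse = magnitude_prune(w, sparsity=0.9)          # 90% of entries become zero
a, b = low_rank_factorize(w, rank=32)                # 256*256 -> 2 * (256*32) parameters
q, scale, offset = uniform_quantize(w, num_bits=8)   # float32 -> 8-bit codes (~4x smaller)
print("low-rank max error:", np.abs(w - a @ b).max())
print("quantization max error:", np.abs(w - (q * scale + offset)).max())
```

In practice these methods are applied to trained networks (often with fine-tuning afterwards) rather than to a random matrix as here, and are frequently combined, e.g. pruning followed by quantization.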

Papers

Showing 941–950 of 1356 papers

| Title | Status | Hype |
| --- | --- | --- |
| Weight Squeezing: Reparameterization for Knowledge Transfer and Model Compression | | 0 |
| A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks | | 0 |
| Optimal Policy Sparsification and Low Rank Decomposition for Deep Reinforcement Learning | | 0 |
| Optimising TinyML with Quantization and Distillation of Transformer and Mamba Models for Indoor Localisation on Edge Devices | | 0 |
| Optimization and Scalability of Collaborative Filtering Algorithms in Large Language Models | | 0 |
| Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy | | 0 |
| Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques | | 0 |
| Optimizing Singular Spectrum for Large Language Model Compression | | 0 |
| Optimizing Small Language Models for In-Vehicle Function-Calling | | 0 |
| Optimizing Traffic Signal Control using High-Dimensional State Representation and Efficient Deep Reinforcement Learning | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MobileBERT + 2bit-1dim model compression using DKM | Accuracy | 82.13 | | Unverified |
| 2 | MobileBERT + 1bit-1dim model compression using DKM | Accuracy | 63.17 | | Unverified |
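
For context on these entries: DKM (differentiable k-means) compresses weights by clustering them into a small codebook, so "b-bit, 1-dim" means each scalar weight is replaced by an index into 2^b learned centroids. The sketch below shows only the hard-assignment clustering idea with illustrative function names; the actual DKM method uses soft, differentiable assignments during training, which is not reproduced here.

```python
import numpy as np

def kmeans_weight_cluster(weights: np.ndarray, num_bits: int, iters: int = 20):
    """Cluster scalar weights into 2**num_bits centroids (plain hard k-means).
    Each weight is then stored as a num_bits-wide index into the codebook."""
    w = weights.reshape(-1)
    k = 2 ** num_bits
    # Initialize centroids at evenly spaced quantiles of the weight values.
    centroids = np.quantile(w, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = w[assign == j].mean()
    return assign.reshape(weights.shape), centroids

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512)).astype(np.float32)
idx, codebook = kmeans_weight_cluster(w, num_bits=2)  # "2bit-1dim": 4 scalar centroids
w_hat = codebook[idx]                                 # reconstructed weights
print(f"mean abs error at 2 bits: {np.abs(w - w_hat).mean():.4f}")
```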