SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
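The float-to-fixed-point replacement described above can be illustrated with symmetric uniform int8 quantization. This is a minimal sketch of the general idea, not the scheme from the cited paper: it picks a single scale from the tensor's maximum absolute value, rounds to int8 codes, and maps them back to approximate floats.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric uniform quantization: float32 -> int8 codes plus a scale."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    # Round to the nearest code and clip to the symmetric int8 range.
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to approximate float32 values."""
    return q.astype(np.float32) * scale

x = np.array([-1.5, -0.2, 0.0, 0.7, 1.5], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)
```

With a single per-tensor scale, the round-trip error is bounded by half the quantization step (scale / 2); finer-grained schemes (per-channel scales, asymmetric zero-points) trade a little bookkeeping for lower error.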

Papers

Showing 1126–1150 of 4925 papers

| Title | Status | Hype |
|-------|--------|------|
| BBQRec: Behavior-Bind Quantization for Multi-Modal Sequential Recommendation | | 0 |
| Achieving binary weight and activation for LLMs using Post-Training Quantization | | 0 |
| Two is Better than One: Efficient Ensemble Defense for Robust and Compact Models | | 0 |
| AccLLM: Accelerating Long-Context LLM Inference Via Algorithm-Hardware Co-Design | | 0 |
| Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs | Code | 0 |
| Bridging the Gap between Continuous and Informative Discrete Representations by Random Product Quantization | | 0 |
| Balancing Robustness and Efficiency in Embedded DNNs Through Activation Function Selection | | 0 |
| PRIMA.CPP: Speeding Up 70B-Scale LLM Inference on Low-Resource Everyday Home Clusters | Code | 0 |
| Skin Color Measurement from Dermatoscopic Images: An Evaluation on a Synthetic Dataset | | 0 |
| Autoregressive High-Order Finite Difference Modulo Imaging: High-Dynamic Range for Computer Vision Applications | | 0 |
| Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions | | 0 |
| Efficient FPGA-accelerated Convolutional Neural Networks for Cloud Detection on CubeSats | | 0 |
| Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for Energy Efficiency, Output Accuracy, and Inference Latency | | 0 |
| Compressing 3D Gaussian Splatting by Noise-Substituted Vector Quantization | Code | 0 |
| HPGN: Hybrid Priors-Guided Network for Compressed Low-Light Image Enhancement | | 0 |
| Bridging the Gap between Gaussian Diffusion Models and Universal Quantization for Image Compression | | 0 |
| Moment Quantization for Video Temporal Grounding | | 0 |
| When Reasoning Meets Compression: Benchmarking Compressed Large Reasoning Models on Complex Reasoning Tasks | | 0 |
| LLMPi: Optimizing LLMs for High-Throughput on Raspberry Pi | | 0 |
| QSViT: A Methodology for Quantizing Spiking Vision Transformers | | 0 |
| Model Hemorrhage and the Robustness Limits of Large Language Models | | 0 |
| Style Quantization for Data-Efficient GAN Training | | 0 |
| SQuat: Subspace-orthogonal KV Cache Quantization | | 0 |
| Cocktail: Chunk-Adaptive Mixed-Precision Quantization for Long-Context LLM Inference | | 0 |
| NeuralGS: Bridging Neural Fields and 3D Gaussian Splatting for Compact 3D Representations | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified |
| 2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified |
| 3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified |
| 4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified |
| 5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified |
| 6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified |
| 7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified |
| 8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified |
| 9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified |
| 10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified |
| 2 | DTQ | MAP | 0.79 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified |
| 2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | | Accuracy | 98.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | | Accuracy | 92.92 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | | Accuracy | 99.8 | | Unverified |