SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
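The float-to-fixed-point replacement described above can be sketched as symmetric per-tensor int8 quantization. This is a minimal illustration, not the method of the cited paper; the helper names `quantize_int8` and `dequantize` are invented for this example.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 codes plus a scale."""
    scale = np.max(np.abs(x)) / 127.0  # map the largest magnitude onto 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Rounding error is bounded by about half a quantization step (scale / 2)
print(np.max(np.abs(x - x_hat)))
```

Cheap int8 arithmetic can then run on `q`, with `scale` carried along to map results back to floating point; per-channel scales and zero-points are common refinements.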

Papers

Showing 1101–1150 of 4925 papers

Title | Status | Hype
FGMP: Fine-Grained Mixed-Precision Weight and Activation Quantization for Hardware-Accelerated LLM Inference | - | 0
Lightweight Road Environment Segmentation using Vector Quantization | - | 0
Gradual Binary Search and Dimension Expansion: A general method for activation quantization in LLMs | - | 0
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs | - | 0
The Binary and Ternary Quantization Can Improve Feature Discrimination | - | 0
ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs | Code | 0
FedX: Adaptive Model Decomposition and Quantization for IoT Federated Learning | - | 0
D^2MoE: Dual Routing and Dynamic Scheduling for Efficient On-Device MoE-based LLM Serving | - | 0
GT-SVQ: A Linear-Time Graph Transformer for Node Classification Using Spiking Vector Quantization | Code | 0
Abstractive Summarization from an Audio Transcription | - | 0
ESC-MVQ: End-to-End Semantic Communication With Multi-Codebook Vector Quantization | - | 0
Neural Network Emulation of the Classical Limit in Quantum Systems via Learned Observable Mappings | - | 0
GOAT-TTS: Expressive and Realistic Speech Generation via A Dual-Branch LLM | - | 0
CSPLADE: Learned Sparse Retrieval with Causal Language Models | - | 0
Quantization Error Propagation: Revisiting Layer-Wise Post-Training Quantization | - | 0
Simultaneous Input and State Estimation under Output Quantization: A Gaussian Mixture approach | - | 0
Asymptotic stabilization under homomorphic encryption: A re-encryption free method | - | 0
Deploying Large AI Models on Resource-Limited Devices with Split Federated Learning | - | 0
SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting | - | 0
MixDiT: Accelerating Image Diffusion Transformer Inference with Mixed-Precision MX Quantization | - | 0
Muon-Accelerated Attention Distillation for Real-Time Edge Synthesis via Optimized Latent Diffusion | - | 0
MotionDreamer: One-to-Many Motion Synthesis with Localized Generative Masked Transformer | - | 0
APSQ: Additive Partial Sum Quantization with Algorithm-Hardware Co-Design | Code | 0
PoGO: A Scalable Proof of Useful Work via Quantized Gradient Descent and Merkle Proofs | - | 0
CHIME: A Compressive Framework for Holistic Interest Modeling | - | 0
BBQRec: Behavior-Bind Quantization for Multi-Modal Sequential Recommendation | - | 0
Achieving binary weight and activation for LLMs using Post-Training Quantization | - | 0
Two is Better than One: Efficient Ensemble Defense for Robust and Compact Models | - | 0
AccLLM: Accelerating Long-Context LLM Inference Via Algorithm-Hardware Co-Design | - | 0
Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs | Code | 0
Bridging the Gap between Continuous and Informative Discrete Representations by Random Product Quantization | - | 0
Balancing Robustness and Efficiency in Embedded DNNs Through Activation Function Selection | - | 0
PRIMA.CPP: Speeding Up 70B-Scale LLM Inference on Low-Resource Everyday Home Clusters | Code | 0
Skin Color Measurement from Dermatoscopic Images: An Evaluation on a Synthetic Dataset | - | 0
Autoregressive High-Order Finite Difference Modulo Imaging: High-Dynamic Range for Computer Vision Applications | - | 0
Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions | - | 0
Efficient FPGA-accelerated Convolutional Neural Networks for Cloud Detection on CubeSats | - | 0
Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for Energy Efficiency, Output Accuracy, and Inference Latency | - | 0
Compressing 3D Gaussian Splatting by Noise-Substituted Vector Quantization | Code | 0
HPGN: Hybrid Priors-Guided Network for Compressed Low-Light Image Enhancement | - | 0
Bridging the Gap between Gaussian Diffusion Models and Universal Quantization for Image Compression | - | 0
Moment Quantization for Video Temporal Grounding | - | 0
When Reasoning Meets Compression: Benchmarking Compressed Large Reasoning Models on Complex Reasoning Tasks | - | 0
LLMPi: Optimizing LLMs for High-Throughput on Raspberry Pi | - | 0
QSViT: A Methodology for Quantizing Spiking Vision Transformers | - | 0
Model Hemorrhage and the Robustness Limits of Large Language Models | - | 0
Style Quantization for Data-Efficient GAN Training | - | 0
SQuat: Subspace-orthogonal KV Cache Quantization | - | 0
Cocktail: Chunk-Adaptive Mixed-Precision Quantization for Long-Context LLM Inference | - | 0
NeuralGS: Bridging Neural Fields and 3D Gaussian Splatting for Compact 3D Representations | - | 0
Page 23 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified