SOTAVerified

Quantization

Quantization is a promising technique for reducing the computational cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
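As a concrete illustration of the float-to-fixed-point mapping described above, here is a minimal sketch of affine (asymmetric) int8 quantization in plain Python. The function names and the per-tensor min/max calibration are illustrative assumptions, not the method of any specific paper listed below.

```python
def quantize_int8(values):
    # Affine quantization: map the tensor's [min, max] range onto int8 [-128, 127].
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # 256 int8 levels; guard against constant tensors
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    # Recover approximate float values; error is bounded by half a quantization step.
    return [(qi - zero_point) * scale for qi in q]

weights = [0.91, -1.3, 0.02, 0.77, -0.45]
q, scale, zp = quantize_int8(weights)
restored = dequantize_int8(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

After this round trip, every value is reconstructed to within `scale / 2`, which is the trade-off the definition above refers to: cheaper int8 arithmetic and storage in exchange for bounded rounding error.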

Papers

Showing 551–600 of 4925 papers

Title | Status | Hype
RL-RC-DoT: A Block-level RL agent for Task-Aware Video Compression | - | 0
HAC++: Towards 100X Compression of 3D Gaussian Splatting | Code | 3
Practical Modulo Sampling: Mitigating High-Frequency Components | - | 0
Communication-Efficient Federated Learning by Quantized Variance Reduction for Heterogeneous Wireless Edge Networks | - | 0
Personalized Federated Learning for Cellular VR: Online Learning and Dynamic Caching | - | 0
Ditto: Accelerating Diffusion Model via Temporal Value Similarity | - | 0
DC-PCN: Point Cloud Completion Network with Dual-Codebook Guided Quantization | - | 0
LiFT: Lightweight, FPGA-tailored 3D object detection based on LiDAR data | Code | 0
BeST -- A Novel Source Selection Metric for Transfer Learning | - | 0
A Novel Hybrid Precoder With Low-Resolution Phase Shifters and Fronthaul Capacity Limitation | - | 0
LUT-DLA: Lookup Table as Efficient Extreme Low-Bit Deep Learning Accelerator | - | 0
4bit-Quantization in Vector-Embedding for RAG | Code | 0
Lossless Compression of Vector IDs for Approximate Nearest Neighbor Search | Code | 2
Atleus: Accelerating Transformers on the Edge Enabled by 3D Heterogeneous Manycore Architectures | - | 0
The Devil is in the Details: Simple Remedies for Image-to-LiDAR Representation Learning | - | 0
Real-time Indexing for Large-scale Recommendation by Streaming Vector Quantization Retriever | - | 0
Rethinking Post-Training Quantization: Introducing a Statistical Pre-Calibration Approach | - | 0
Large Language Models For Text Classification: Case Study And Comprehensive Review | - | 0
D^2-DPM: Dual Denoising for Quantized Diffusion Probabilistic Models | Code | 1
Koopman Meets Limited Bandwidth: Effect of Quantization on Data-Driven Linear Prediction and Control of Nonlinear Systems | - | 0
Dataset Distillation as Pushforward Optimal Quantization | - | 0
FlexQuant: Elastic Quantization Framework for Locally Hosted LLM on Edge Devices | - | 0
QuantuneV2: Compiler-Based Local Metric-Driven Mixed Precision Quantization for Practical Embedded AI Applications | - | 0
ZOQO: Zero-Order Quantized Optimization | - | 0
DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory | Code | 0
Precoding Design for Limited-Feedback MISO Systems via Character-Polynomial Codes | - | 0
Estimation and Restoration of Unknown Nonlinear Distortion using Diffusion | Code | 0
Mix-QViT: Mixed-Precision Vision Transformer Quantization Driven by Layer Importance and Quantization Sensitivity | - | 0
kANNolo: Sweet and Smooth Approximate k-Nearest Neighbors Search | Code | 1
Neural Architecture Codesign for Fast Physics Applications | Code | 0
Knowledge Transfer in Model-Based Reinforcement Learning Agents for Efficient Multi-Task Learning | - | 0
JAQ: Joint Efficient Architecture Design and Low-Bit Quantization with Hardware-Software Co-Exploration | - | 0
DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models | Code | 1
Histogram-Equalized Quantization for logic-gated Residual Neural Networks | - | 0
UPAQ: A Framework for Real-Time and Energy-Efficient 3D Object Detection in Autonomous Vehicles | - | 0
Effective and Efficient Mixed Precision Quantization of Speech Foundation Models | - | 0
The Power of Negative Zero: Datatype Customization for Quantized Large Language Models | Code | 0
Quantization Meets Reasoning: Exploring LLM Low-Bit Quantization Degradation for Mathematical Reasoning | - | 0
A Novel Structure-Agnostic Multi-Objective Approach for Weight-Sharing Compression in Deep Neural Networks | - | 0
Qinco2: Vector Compression and Search with Improved Implicit Neural Codebooks | Code | 2
Scaling Laws for Floating Point Quantization Training | - | 0
Remote Inference over Dynamic Links via Adaptive Rate Deep Task-Oriented Vector Quantization | Code | 0
HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1
TAPAS: Thermal- and Power-Aware Scheduling for LLM Inference in Cloud Platforms | - | 0
Optimizing Edge AI: A Comprehensive Survey on Data, Model, and System Strategies | Code | 2
Optimizing Small Language Models for In-Vehicle Function-Calling | - | 0
Millimeter-Wave Energy-Efficient Hybrid Beamforming Architecture and Algorithm | - | 0
Compressed Domain Prior-Guided Video Super-Resolution for Cloud Gaming Content | - | 0
Modulo Sampling: Performance Guarantees in The Presence of Quantization | - | 0
TreeLUT: An Efficient Alternative to Deep Neural Networks for Inference Acceleration Using Gradient Boosted Decision Trees | Code | 0
Page 12 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | - | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | - | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | - | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | - | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | - | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | - | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | - | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | - | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | - | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | - | Unverified
2 | DTQ | MAP | 0.79 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | - | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 98.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 92.92 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 95.13 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | TAR @ FAR=1e-4 | 96.38 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | - | Accuracy | 99.8 | - | Unverified