SOTAVerified

Quantization

Quantization is a promising technique for reducing the computation cost of neural network training: it replaces high-cost floating-point numbers (e.g., float32) with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
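To make the float-to-fixed-point mapping concrete, below is a minimal NumPy sketch of uniform affine (asymmetric) quantization, the simplest form of the float32-to-int8 conversion described above. The function names and the per-tensor min/max calibration are illustrative choices, not taken from the sourced paper or any paper listed on this page.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniform affine quantization of a float32 tensor to int8.

    Returns the int8 tensor plus the (scale, zero_point) needed to
    dequantize: x is approximated by scale * (q - zero_point).
    Production schemes also nudge the range so that 0.0 is exactly
    representable; this sketch skips that refinement.
    """
    qmin, qmax = -128, 127
    x_min, x_max = float(x.min()), float(x.max())
    # Guard against a constant tensor, where the range collapses to zero.
    scale = max(x_max - x_min, 1e-8) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 values back to approximate float32 values."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)
print(np.abs(x - dequantize(q, scale, zp)).max())  # bounded by ~scale / 2
```

Round-tripping through dequantize shows the rounding error is bounded by roughly half the scale, which is why low-bit training and post-training schemes spend so much effort choosing the quantization range.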

Papers

Showing 4601–4650 of 4925 papers

Title | Status | Hype
Optimizing edge AI models on HPC systems with the edge in the loop | Code | 0
Efficient Mixed Precision Quantization in Graph Neural Networks | Code | 0
TAS: Ternarized Neural Architecture Search for Resource-Constrained Edge Devices | Code | 0
Neural Network Compression of ACAS Xu Early Prototype is Unsafe: Closed-Loop Verification through Quantized State Backreachability | Code | 0
Accelerated Nearest Neighbor Search with Quick ADC | Code | 0
Teacher Intervention: Improving Convergence of Quantization Aware Training for Ultra-Low Precision Transformers | Code | 0
Communication-Efficient Federated Learning via Clipped Uniform Quantization | Code | 0
SNN-SC: A Spiking Semantic Communication Framework for Collaborative Intelligence | Code | 0
Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory | Code | 0
Optimizing the energy consumption of spiking neural networks for neuromorphic applications | Code | 0
AxFormer: Accuracy-driven Approximation of Transformers for Faster, Smaller and more Accurate NLP Models | Code | 0
Learning Bag-of-Features Pooling for Deep Convolutional Neural Networks | Code | 0
Orthonormal Product Quantization Network for Scalable Face Image Retrieval | Code | 0
Variance Control via Weight Rescaling in LLM Pre-training | Code | 0
Learning Accurate Performance Predictors for Ultrafast Automated Model Compression | Code | 0
Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization | Code | 0
Understanding the Effect of Model Compression on Social Bias in Large Language Models | Code | 0
Learned transform compression with optimized entropy encoding | Code | 0
Visualizing hierarchies in scRNA-seq data using a density tree-biased autoencoder | Code | 0
Audio Spectral Enhancement: Leveraging Autoencoders for Low Latency Reconstruction of Long, Lossy Audio Sequences | Code | 0
Climate Finance Bench | Code | 0
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models | Code | 0
CLAQ: Pushing the Limits of Low-Bit Post-Training Quantization for LLMs | Code | 0
Langevin dynamics based algorithm e-THεO POULA for stochastic optimization problems with discontinuous stochastic gradient | Code | 0
KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference | Code | 0
Activations and Gradients Compression for Model-Parallel Training | Code | 0
Overcoming Distribution Mismatch in Quantizing Image Super-Resolution Networks | Code | 0
Variational Inference with Latent Space Quantization for Adversarial Resilience | Code | 0
KP2Dtiny: Quantized Neural Keypoint Detection and Description on the Edge | Code | 0
TensorQuant - A Simulation Toolbox for Deep Neural Network Quantization | Code | 0
Real-Time Spacecraft Pose Estimation Using Mixed-Precision Quantized Neural Network on COTS Reconfigurable MPSoC | Code | 0
U-Net Fixed-Point Quantization for Medical Image Segmentation | Code | 0
Efficient Large-scale Approximate Nearest Neighbor Search on the GPU | Code | 0
Addition is almost all you need: Compressing neural networks with double binary factorization | Code | 0
Just Round: Quantized Observation Spaces Enable Memory Efficient Learning of Dynamic Locomotion | Code | 0
JPEG Inspired Deep Learning | Code | 0
Joint Pruning and Channel-wise Mixed-Precision Quantization for Efficient Deep Neural Networks | Code | 0
Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | Code | 0
Deep Image Compression via End-to-End Learning | Code | 0
Joint Maximum Purity Forest with Application to Image Super-Resolution | Code | 0
Efficient Integer-Arithmetic-Only Convolutional Neural Networks | Code | 0
Efficient High-Resolution Template Matching with Vector Quantized Nearest Neighbour Fields | Code | 0
Deep Hashing via Householder Quantization | Code | 0
Iterative Training: Finding Binary Weight Deep Neural Networks with Layer Binarization | Code | 0
I&S-ViT: An Inclusive & Stable Method for Pushing the Limit of Post-Training ViTs Quantization | Code | 0
Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only | Code | 0
WaveQ: Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Regularization | Code | 0
Efficient Federated Intrusion Detection in 5G ecosystem using optimized BERT-based model | Code | 0
Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression | Code | 0
Deep Convolutional AutoEncoder-based Lossy Image Compression | Code | 0
Page 93 of 99

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | FQ-ViT (ViT-L) | Top-1 Accuracy (%) | 85.03 | | Unverified
2 | FQ-ViT (ViT-B) | Top-1 Accuracy (%) | 83.31 | | Unverified
3 | FQ-ViT (Swin-B) | Top-1 Accuracy (%) | 82.97 | | Unverified
4 | FQ-ViT (Swin-S) | Top-1 Accuracy (%) | 82.71 | | Unverified
5 | FQ-ViT (DeiT-B) | Top-1 Accuracy (%) | 81.2 | | Unverified
6 | FQ-ViT (Swin-T) | Top-1 Accuracy (%) | 80.51 | | Unverified
7 | FQ-ViT (DeiT-S) | Top-1 Accuracy (%) | 79.17 | | Unverified
8 | Xception W8A8 | Top-1 Accuracy (%) | 78.97 | | Unverified
9 | ADLIK-MO-ResNet50-W4A4 | Top-1 Accuracy (%) | 77.88 | | Unverified
10 | ADLIK-MO-ResNet50-W3A4 | Top-1 Accuracy (%) | 77.34 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_3 | MAP | 160,327.04 | | Unverified
2 | DTQ | MAP | 0.79 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | OutEffHop-Bert_base | Perplexity | 6.3 | | Unverified
2 | OutEffHop-Bert_base | Perplexity | 6.21 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 98.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 92.92 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SSD ResNet50 V1 FPN 640x640 | MAP | 34.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 95.13 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | TAR @ FAR=1e-4 | 96.38 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3DCNN_VIVA_5 | All | 84,809,664 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | Accuracy | 99.8 | | Unverified