SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates the vulnerability of machine learning models to various types of adversarial attacks.
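To make the evaluation concrete, here is a minimal sketch of an adversarial attack in the style of the Fast Gradient Sign Method (FGSM): perturb an input by a small step in the direction that increases the loss, then check whether the model's prediction survives. The toy logistic-regression "model", the weights, and the epsilon below are illustrative assumptions, not taken from any paper or benchmark listed on this page.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM for binary logistic regression.

    Returns x' = x + eps * sign(dL/dx), where L is the
    binary cross-entropy loss of the prediction against label y.
    """
    p = sigmoid(x @ w + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # gradient of BCE loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values): x is confidently class 1.
w = np.array([1.5, -0.5])
b = 0.0
x = np.array([2.0, 0.0])
y = 1.0

clean_pred = sigmoid(x @ w + b) > 0.5          # prediction on clean input
x_adv = fgsm_perturb(x, y, w, b, eps=2.5)      # adversarially perturbed input
adv_pred = sigmoid(x_adv @ w + b) > 0.5        # prediction under attack

print(clean_pred, adv_pred)
```

A robustness benchmark aggregates exactly this kind of check over a test set and a fixed attack budget: accuracy on clean inputs versus accuracy on attacked inputs.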

Papers

Showing 1–25 of 1746 papers

| Title | Status | Hype |
|---|---|---|
| AugLy: Data Augmentations for Robustness | Code | 5 |
| LORE: Lagrangian-Optimized Robust Embeddings for Visual Encoders | Code | 4 |
| Adversarial Robustness Toolbox v1.0.0 | Code | 3 |
| Improving Alignment and Robustness with Circuit Breakers | Code | 3 |
| Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples | Code | 3 |
| Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning | Code | 3 |
| Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints | Code | 2 |
| RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors | Code | 2 |
| On Evaluating Adversarial Robustness of Large Vision-Language Models | Code | 2 |
| Dissecting Adversarial Robustness of Multimodal LM Agents | Code | 2 |
| CLAIMED, a visual and scalable component library for Trusted AI | Code | 2 |
| MIBench: A Comprehensive Framework for Benchmarking Model Inversion Attack and Defense | Code | 2 |
| Artificial Kuramoto Oscillatory Neurons | Code | 2 |
| ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming | Code | 2 |
| An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records | Code | 2 |
| Authorship Obfuscation in Multilingual Machine-Generated Text Detection | Code | 2 |
| One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models | Code | 2 |
| A Survey on Deep Neural Network Pruning-Taxonomy, Comparison, Analysis, and Recommendations | Code | 2 |
| Adversarial Robustification via Text-to-Image Diffusion Models | Code | 1 |
| GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1 |
| Adversarial Image Color Transformations in Explicit Color Filter Space | Code | 1 |
| Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness | Code | 1 |
| Adversarial Machine Learning: Bayesian Perspectives | Code | 1 |
| Adversarially Robust Distillation | Code | 1 |
| Adversarial Prompt Tuning for Vision-Language Models | Code | 1 |
Page 1 of 70

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeBERTa (single model) | Accuracy | 0.61 | — | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | — | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | — | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | — | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | — | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | — | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | — | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | — | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | — | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed classifier | Accuracy | 95.23 | — | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | — | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | — | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | — | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | — | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | — | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | — | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | — | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | — | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | — | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | — | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | — | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | — | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | — | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | — | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | — | Unverified |