SOTAVerified

Adversarial Robustness

Adversarial robustness measures how well a machine learning model maintains its performance under adversarial attacks: inputs deliberately perturbed to induce incorrect predictions.
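As a minimal illustration of the kind of attack these benchmarks evaluate against, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression model. The model, weights, and `fgsm_perturb` helper are all illustrative assumptions, not part of any listed paper; FGSM itself simply steps the input in the sign of the loss gradient.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM attack on a logistic-regression model (illustrative sketch).

    For binary cross-entropy loss, the gradient of the loss w.r.t. the
    input x is (sigmoid(w @ x + b) - y) * w, so FGSM adds
    eps * sign(grad) to x.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad_x = (p - y) * w                     # analytic input gradient
    return x + eps * np.sign(grad_x)

# Toy model and a point it classifies correctly as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.6)

print(w @ x + b)      # clean score: 1.5 (positive -> class 1)
print(w @ x_adv + b)  # adversarial score: -0.3 (negative -> flipped to class 0)
```

A model is robust (at perturbation budget `eps`) to the extent that its prediction does not flip under such perturbations; leaderboards like the one below report accuracy under attacks of this kind.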

Papers

Showing 201–225 of 1746 papers

Title | Status | Hype
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization | Code | 1
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning | Code | 1
Consistency Regularization for Adversarial Robustness | Code | 1
MENLI: Robust Evaluation Metrics from Natural Language Inference | Code | 1
Broken Neural Scaling Laws | Code | 1
Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning | Code | 1
CARBEN: Composite Adversarial Robustness Benchmark | Code | 1
Certified Adversarial Robustness via Randomized Smoothing | Code | 1
CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models | Code | 1
Multitask Learning Strengthens Adversarial Robustness | Code | 1
Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness | Code | 1
The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks | Code | 1
OET: Optimization-based prompt injection Evaluation Toolkit | Code | 1
Cauchy-Schwarz Divergence Information Bottleneck for Regression | Code | 1
Efficient Image-to-Image Diffusion Classifier for Adversarial Robustness | Code | 1
AdvDrop: Adversarial Attack to DNNs by Dropping Information | Code | 1
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks | Code | 1
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization | Code | 1
Adversarial Visual Robustness by Causal Intervention | Code | 1
Adversarial vulnerability of powerful near out-of-distribution detection | Code | 1
Adversarial Vulnerability of Randomized Ensembles | Code | 1
Adversarial Machine Learning: Bayesian Perspectives | Code | 1
GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1
Certified Training: Small Boxes are All You Need | Code | 1
Federated Robustness Propagation: Sharing Robustness in Heterogeneous Federated Learning | Code | 1
Page 9 of 70

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeBERTa (single model) | Accuracy | 0.61 | — | Unverified
2 | ALBERT (single model) | Accuracy | 0.59 | — | Unverified
3 | T5 (single model) | Accuracy | 0.57 | — | Unverified
4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | — | Unverified
5 | FreeLB (single model) | Accuracy | 0.5 | — | Unverified
6 | RoBERTa (single model) | Accuracy | 0.5 | — | Unverified
7 | InfoBERT (single model) | Accuracy | 0.46 | — | Unverified
8 | ELECTRA (single model) | Accuracy | 0.42 | — | Unverified
9 | BERT (single model) | Accuracy | 0.34 | — | Unverified
10 | SMART_BERT (single model) | Accuracy | 0.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Mixed classifier | Accuracy | 95.23 | — | Unverified
2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | — | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | — | Unverified
4 | GLOT-DR | Accuracy | 84.13 | — | Unverified
5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | — | Unverified
2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | — | Unverified
3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | — | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | — | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | — | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | — | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | — | Unverified
2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | — | Unverified
3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | — | Unverified
4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | — | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | — | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | — | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Mixed Classifier | Clean Accuracy | 85.21 | — | Unverified
2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | — | Unverified