SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates how vulnerable machine learning models are to various types of adversarial attacks, i.e. small, deliberately crafted input perturbations that cause misclassification.
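The attack setting behind most of these benchmarks can be illustrated with the fast gradient sign method (FGSM), one of the simplest adversarial attacks: perturb the input one step in the sign of the loss gradient. Below is a minimal, dependency-free sketch on a toy logistic-regression classifier; the weights, input, and epsilon are illustrative, not taken from any listed paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One FGSM step: x_adv = x + eps * sign(dL/dx).

    For binary cross-entropy with a linear model, the input gradient
    is dL/dx_i = (p - y) * w_i, so no autograd is needed here.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Toy classifier and an input it classifies correctly as class 1.
w, b = [2.0, -1.0], 0.0
x, y = [0.6, 0.2], 1

x_adv = fgsm(w, b, x, y, eps=0.5)
print(predict(w, b, x))      # clean confidence for class 1 (> 0.5)
print(predict(w, b, x_adv))  # confidence after the attack (< 0.5)
```

The attack flips the prediction even though each feature moves by at most 0.5; robust-training papers in the list above aim to keep accuracy high under exactly this kind of bounded perturbation.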

Papers

Showing 201–225 of 1746 papers

Title | Status | Hype
IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks | Code | 1
Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment | Code | 1
Adversarial Robustness via Random Projection Filters | Code | 1
A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow | Code | 1
Adversarial Robustness Against the Union of Multiple Threat Models | Code | 1
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization | Code | 1
A Regularization Method to Improve Adversarial Robustness of Neural Networks for ECG Signal Classification | Code | 1
A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective | Code | 1
Mitigating Accuracy-Robustness Trade-off via Balanced Multi-Teacher Adversarial Distillation | Code | 1
MNIST-C: A Robustness Benchmark for Computer Vision | Code | 1
Multi-scale Diffusion Denoised Smoothing | Code | 1
Multitask Learning Strengthens Adversarial Robustness | Code | 1
ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection | Code | 1
Neural Networks with Recurrent Generative Feedback | Code | 1
Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks | Code | 1
AdvDrop: Adversarial Attack to DNNs by Dropping Information | Code | 1
Bag of Tricks for Adversarial Training | Code | 1
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization | Code | 1
Certified Training: Small Boxes are All You Need | Code | 1
Adversarial vulnerability of powerful near out-of-distribution detection | Code | 1
Adversarial Vulnerability of Randomized Ensembles | Code | 1
Adversarial Machine Learning: Bayesian Perspectives | Code | 1
GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1
Are Transformers More Robust Than CNNs? | Code | 1
Efficient Image-to-Image Diffusion Classifier for Adversarial Robustness | Code | 1
Page 9 of 70

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified
2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified
3 | T5 (single model) | Accuracy | 0.57 | | Unverified
4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified
5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified
6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified
7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified
8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified
9 | BERT (single model) | Accuracy | 0.34 | | Unverified
10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed classifier | Accuracy | 95.23 | | Unverified
2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified
4 | GLOT-DR | Accuracy | 84.13 | | Unverified
5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified
2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified
3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified
2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified
3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified
4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified
2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified