SOTAVerified

Adversarial Robustness

Adversarial Robustness evaluates the vulnerability of machine learning models to various types of adversarial attacks, i.e., small input perturbations crafted to change a model's prediction.
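To make the evaluation setting concrete, below is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression model. The model weights, input, and epsilon value are illustrative placeholders chosen for this sketch; they are not drawn from any paper or benchmark listed on this page.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One FGSM step: perturb x by eps in the sign of the loss gradient."""
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y) * w              # d(binary cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)  # move each coordinate against the true class

# Toy model and input (illustrative placeholders).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0  # true label

x_adv = fgsm_attack(x, y, w, b, eps=0.3)
clean_pred = sigmoid(w @ x + b)
adv_pred = sigmoid(w @ x_adv + b)
# The adversarial input lowers the model's confidence in the true class.
```

A robustness benchmark then reports accuracy on such perturbed inputs (often under stronger iterative attacks such as PGD or AutoAttack) alongside clean accuracy.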

Papers

Showing 151–175 of 1746 papers

| Title | Status | Hype |
| --- | --- | --- |
| Improving Adversarial Robustness via Mutual Information Estimation | Code | 1 |
| Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness | Code | 1 |
| Tailoring Self-Supervision for Supervised Learning | Code | 1 |
| Adversarial Contrastive Learning via Asymmetric InfoNCE | Code | 1 |
| CARBEN: Composite Adversarial Robustness Benchmark | Code | 1 |
| Distance Learner: Incorporating Manifold Prior to Model Training | Code | 1 |
| Adversarially-Aware Robust Object Detector | Code | 1 |
| Removing Batch Normalization Boosts Adversarial Training | Code | 1 |
| (Certified!!) Adversarial Robustness for Free! | Code | 1 |
| Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum | Code | 1 |
| Towards Adversarial Attack on Vision-Language Pre-training Models | Code | 1 |
| Understanding Robust Overfitting of Adversarial Training and Beyond | Code | 1 |
| Adversarial Vulnerability of Randomized Ensembles | Code | 1 |
| CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models | Code | 1 |
| FedNest: Federated Bilevel, Minimax, and Compositional Optimization | Code | 1 |
| Flooding-X: Improving BERT’s Resistance to Adversarial Attacks via Loss-Restricted Fine-Tuning | Code | 1 |
| Engineering flexible machine learning systems by traversing functionally-invariant paths | Code | 1 |
| Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck | Code | 1 |
| Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network | Code | 1 |
| How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective | Code | 1 |
| A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow | Code | 1 |
| Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack | Code | 1 |
| Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4 | Code | 1 |
| ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches | Code | 1 |
| Enhancing Adversarial Robustness for Deep Metric Learning | Code | 1 |
Page 7 of 70

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified |
| 2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified |
| 3 | T5 (single model) | Accuracy | 0.57 | | Unverified |
| 4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified |
| 5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified |
| 6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified |
| 7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified |
| 8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified |
| 9 | BERT (single model) | Accuracy | 0.34 | | Unverified |
| 10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mixed classifier | Accuracy | 95.23 | | Unverified |
| 2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified |
| 3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified |
| 4 | GLOT-DR | Accuracy | 84.13 | | Unverified |
| 5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified |
| 3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified |
| 2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified |
| 3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified |
| 4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified |
| 2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified |
| 3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified |
| 4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified |
| 2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified |