SOTAVerified

Adversarial Robustness

Adversarial Robustness measures how vulnerable machine learning models are to various types of adversarial attacks, i.e. inputs deliberately perturbed to cause misclassification.
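A common starting point for the attacks these papers evaluate is the fast gradient sign method (FGSM): perturb the input by eps times the sign of the loss gradient. A minimal sketch on a toy logistic-regression model (the weights, input, and budget below are hypothetical, chosen only to illustrate a prediction flip):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step fast gradient sign attack on binary cross-entropy loss.

    x: input vector, y: label in {0, 1}, (w, b): logistic-regression
    weights, eps: L-infinity perturbation budget.
    """
    p = sigmoid(w @ x + b)      # model's confidence for class 1
    grad_x = (p - y) * w        # d(BCE loss)/dx for this model
    return x + eps * np.sign(grad_x)

# Toy demo: a correctly classified point pushed across the boundary.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.1])        # w @ x + b = 1.1 > 0, so class 1
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.5)
```

Robust accuracy, the headline metric in most of the papers listed here, is simply clean accuracy measured on such perturbed inputs (usually under stronger multi-step attacks like PGD or AutoAttack).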

Papers

Showing 201–250 of 1746 papers

Title | Status | Hype
Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4 | Code | 1
Large Language Models to Identify Social Determinants of Health in Electronic Health Records | Code | 1
Adversarial Robustness via Random Projection Filters | Code | 1
Adversarial Robustness as a Prior for Learned Representations | Code | 1
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks | Code | 1
MENLI: Robust Evaluation Metrics from Natural Language Inference | Code | 1
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data | Code | 1
Multi-Objective Population Based Training | Code | 1
Multitask Learning Strengthens Adversarial Robustness | Code | 1
NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations | Code | 1
Adversarial Attacks on Graph Classifiers via Bayesian Optimisation | Code | 1
OET: Optimization-based prompt injection Evaluation Toolkit | Code | 1
Adversarial Robustness Against the Union of Multiple Threat Models | Code | 1
On Evaluating Adversarial Robustness | Code | 1
Adversarial Training of Self-supervised Monocular Depth Estimation against Physical-World Attacks | Code | 1
AdvDrop: Adversarial Attack to DNNs by Dropping Information | Code | 1
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks | Code | 1
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization | Code | 1
Adversarial Visual Robustness by Causal Intervention | Code | 1
Adversarial vulnerability of powerful near out-of-distribution detection | Code | 1
Bag of Tricks for Adversarial Training | Code | 1
Adversarial Machine Learning: Bayesian Perspectives | Code | 1
GenoArmory: A Unified Evaluation Framework for Adversarial Attacks on Genomic Foundation Models | Code | 1
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger | Code | 1
OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift | Code | 1
AdvRush: Searching for Adversarially Robust Neural Architectures | Code | 1
Part-Based Models Improve Adversarial Robustness | Code | 1
PartImageNet++ Dataset: Scaling up Part-based Models for Robust Recognition | Code | 1
PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor | Code | 1
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models | Code | 1
Broken Neural Scaling Laws | Code | 1
Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors | Code | 1
Adversarial Prompt Tuning for Vision-Language Models | Code | 1
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning | Code | 1
Composite Adversarial Attacks | Code | 1
Adversarial Attack and Defense Strategies for Deep Speaker Recognition Systems | Code | 1
Adversarial Reasoning at Jailbreaking Time | Code | 1
Pruning Adversarially Robust Neural Networks without Adversarial Examples | Code | 1
Adversarial Attack on Deep Learning-Based Splice Localization | Code | 1
Enhancing Adversarial Robustness via Test-time Transformation Ensembling | Code | 1
Improving Adversarial Robustness via Mutual Information Estimation | Code | 1
Random Smoothing Might be Unable to Certify ℓ∞ Robustness for High-Dimensional Images | Code | 1
A Light Recipe to Train Robust Vision Transformers | Code | 1
Adversarial Robustification via Text-to-Image Diffusion Models | Code | 1
Reliable Adversarial Distillation with Unreliable Teachers | Code | 1
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks | Code | 1
Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging | Code | 1
An Adaptive Orthogonal Convolution Scheme for Efficient and Flexible CNN Architectures | Code | 1
Adversarial Image Color Transformations in Explicit Color Filter Space | Code | 1
Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum | Code | 1
Page 5 of 35

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DeBERTa (single model) | Accuracy | 0.61 | | Unverified
2 | ALBERT (single model) | Accuracy | 0.59 | | Unverified
3 | T5 (single model) | Accuracy | 0.57 | | Unverified
4 | SMART_RoBERTa (single model) | Accuracy | 0.54 | | Unverified
5 | FreeLB (single model) | Accuracy | 0.5 | | Unverified
6 | RoBERTa (single model) | Accuracy | 0.5 | | Unverified
7 | InfoBERT (single model) | Accuracy | 0.46 | | Unverified
8 | ELECTRA (single model) | Accuracy | 0.42 | | Unverified
9 | BERT (single model) | Accuracy | 0.34 | | Unverified
10 | SMART_BERT (single model) | Accuracy | 0.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Mixed classifier | Accuracy | 95.23 | | Unverified
2 | Stochastic-LWTA/PGD/WideResNet-34-10 | Accuracy | 92.26 | | Unverified
3 | Stochastic-LWTA/PGD/WideResNet-34-5 | Accuracy | 91.88 | | Unverified
4 | GLOT-DR | Accuracy | 84.13 | | Unverified
5 | TRADES-ANCRA/ResNet18 | Accuracy | 81.7 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (SGD, Cosine) | Accuracy | 77.4 | | Unverified
2 | ResNet-50 (SGD, Step) | Accuracy | 76.9 | | Unverified
3 | DeiT-S (AdamW, Cosine) | Accuracy | 76.8 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 76.4 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 12.2 | | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 3.3 | | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 3.2 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 3.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | ResNet-50 (AdamW, Cosine) | mean Corruption Error (mCE) | 59.3 | | Unverified
2 | ResNet-50 (SGD, Step) | mean Corruption Error (mCE) | 57.9 | | Unverified
3 | ResNet-50 (SGD, Cosine) | mean Corruption Error (mCE) | 56.9 | | Unverified
4 | DeiT-S (AdamW, Cosine) | mean Corruption Error (mCE) | 48 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | DeiT-S (AdamW, Cosine) | Accuracy | 13 | | Unverified
2 | ResNet-50 (SGD, Cosine) | Accuracy | 8.4 | | Unverified
3 | ResNet-50 (SGD, Step) | Accuracy | 8.3 | | Unverified
4 | ResNet-50 (AdamW, Cosine) | Accuracy | 8.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Mixed Classifier | Clean Accuracy | 85.21 | | Unverified
2 | ResNet18/MART-ANCRA | Clean Accuracy | 60.1 | | Unverified
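One table above reports mean Corruption Error (mCE), which unlike plain accuracy is normalized against a baseline model. A minimal sketch of the standard ImageNet-C style computation, with hypothetical placeholder error rates (the per-corruption errors are averaged over severities and divided by the baseline's errors before averaging over corruptions):

```python
import numpy as np

def mce(model_err, baseline_err):
    """mean Corruption Error in percent.

    model_err, baseline_err: arrays of shape (corruptions, severities)
    holding top-1 error rates; the baseline is AlexNet in the original
    ImageNet-C protocol. Values here are hypothetical placeholders.
    """
    ce = model_err.sum(axis=1) / baseline_err.sum(axis=1)  # per-corruption CE
    return 100.0 * ce.mean()

model_err = np.array([[0.4, 0.5, 0.6],
                      [0.3, 0.4, 0.5]])
baseline_err = np.array([[0.6, 0.7, 0.8],
                         [0.5, 0.6, 0.7]])
result = mce(model_err, baseline_err)   # lower is better
```

Because of the baseline normalization, mCE values from different benchmark runs are only comparable when they share the same baseline model and corruption set.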