
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of a model's input that changes the model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
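As a concrete illustration (not part of the original task description), below is a minimal PyTorch sketch of the Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015), one of the simplest ways to find such a perturbation. The model, inputs x, labels y, and budget epsilon are hypothetical placeholders, and inputs are assumed to be images scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """FGSM sketch: take one step of size epsilon along the sign of the
    loss gradient, pushing the model's prediction away from label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Single gradient-sign step, then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Stronger evaluations typically use iterated variants such as PGD, or attack ensembles such as AutoAttack, both of which appear as metrics in the benchmark results below.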

Papers

Showing 1071–1080 of 1808 papers

| Title | Status | Hype |
| --- | --- | --- |
| MathAttack: Attacking Large Language Models Towards Math Solving Ability | | 0 |
| Maximal Jacobian-based Saliency Map Attack | | 0 |
| AdvGen: Physical Adversarial Attack on Face Presentation Attack Detection Systems | | 0 |
| MedAttacker: Exploring Black-Box Adversarial Attacks on Risk Prediction Models in Healthcare | | 0 |
| MedRDF: A Robust and Retrain-Less Diagnostic Framework for Medical Pretrained Models Against Adversarial Attack | | 0 |
| AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning | | 0 |
| Vulnerability of Appearance-based Gaze Estimation | | 0 |
| Meta-Attack: Class-Agnostic and Model-Agnostic Physical Adversarial Attack | | 0 |
| Adverseness vs. Equilibrium: Exploring Graph Adversarial Resilience through Dynamic Equilibrium | | 0 |
| Metamorphic Adversarial Detection Pipeline for Face Recognition Systems | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified |
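In the tables above, "Attack: PGD20" conventionally denotes robust accuracy under a 20-step Projected Gradient Descent attack (Madry et al., 2018), and "Attack: AutoAttack" denotes robust accuracy under the parameter-free attack ensemble of Croce & Hein (2020). Below is a minimal sketch of a PGD20 robust-accuracy evaluation, assuming a PyTorch classifier, inputs in [0, 1], and an L-infinity budget; the model, loader, epsilon, and alpha values are hypothetical placeholders, not values used by this benchmark.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """L-infinity PGD sketch: repeated gradient-sign steps, each projected
    back into the epsilon-ball around the clean input x."""
    # Random start inside the epsilon-ball, clamped to valid pixels.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the L-infinity ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv

def robust_accuracy(model, loader):
    """Share of test examples still classified correctly after the attack;
    this is the quantity the 'Attack: PGD20' column reports."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return 100.0 * correct / total
```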