SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
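
To make the definition concrete, here is a minimal PyTorch sketch of the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), one of the simplest adversarial attacks. The `model`, the inputs `x`, the labels `y`, and the budget `eps` are illustrative assumptions, not taken from this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One signed-gradient step that increases the loss on the true label.

    Assumed interface: `model` maps images to logits, `x` lies in [0, 1],
    `y` holds integer class labels. `eps=8/255` is a commonly used budget.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp
    # back to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

A single step like this can already flip many predictions while changing each pixel by at most `eps`, which is what keeps the perturbation hard to see.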

Papers

Showing 1601–1610 of 1808 papers

Title | Status | Hype
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies | - | 0
Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey | - | 0
A Study on the Efficiency and Generalization of Light Hybrid Retrievers | - | 0
CoRPA: Adversarial Image Generation for Chest X-rays Using Concept Vector Perturbations and Generative Models | - | 0
CorrAttack: Black-box Adversarial Attack with Structured Search | - | 0
Correlation Analysis of Adversarial Attack in Time Series Classification | - | 0
Corruption Robust Offline Reinforcement Learning with Human Feedback | - | 0
CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection | - | 0
A Study for Universal Adversarial Attacks on Texture Recognition | - | 0
Should Adversarial Attacks Use Pixel p-Norm? | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
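
The claimed values above appear to be robust accuracy under the named attack: the share of test inputs a model still classifies correctly after perturbation. "Attack: PGD20" refers to 20-step Projected Gradient Descent (Madry et al., 2018), and AutoAttack is a parameter-free ensemble of attacks. As a rough sketch of how a PGD-20 evaluation is typically run in PyTorch (the `model`, the inputs, and the L-infinity budget `eps` are assumptions, since the tables do not state them):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    """L-infinity PGD: iterated FGSM-style steps with projection (steps=20 -> PGD-20).

    Assumed interface: `model` maps images in [0, 1] to logits; `eps`,
    `alpha`, and `steps` follow common CIFAR-style settings.
    """
    x = x.detach()
    # Random start inside the eps-ball around the clean input.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then project back into the eps-ball
        # around x and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv
```

Robust accuracy under this attack would then be the ordinary accuracy of `model` measured on `pgd_attack(model, x, y)` over the test set.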