
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of a model's input that changes the machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
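
For a concrete feel of the idea, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks: a single gradient step on the loss, bounded in the L-infinity norm. This is a minimal illustrative sketch, not the method of any paper listed here; the PyTorch framing, the function name, and the 8/255 budget are all assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8/255):
    """One-step L-infinity adversarial attack (FGSM).

    model:   a classifier mapping images to logits (placeholder)
    x:       input batch with pixel values in [0, 1]
    y:       ground-truth labels
    epsilon: perturbation budget (8/255 is a common, assumed default)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss,
    # then clip back to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

An input `x_adv` that the model misclassifies while classifying `x` correctly is exactly the failure mode the defenses listed below try to prevent.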

Papers

Showing 1401–1425 of 1808 papers

Title | Status | Hype
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment | | 0
Query-Free Adversarial Transfer via Undertrained Surrogates | | 0
Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks | Code | 0
Generating Adversarial Examples with an Optimized Quality | | 0
RayS: A Ray Searching Method for Hard-label Adversarial Attack | Code | 1
Adversarial Attacks for Multi-view Deep Models | | 0
Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers | Code | 1
Local Competition and Uncertainty for Adversarial Robustness in Deep Learning | | 0
REGroup: Rank-aggregating Ensemble of Generative Classifiers for Robust Predictions | Code | 0
OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training | | 0
Classifier-independent Lower-Bounds for Adversarial Robustness | | 0
Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution | Code | 1
Adversarial Self-Supervised Contrastive Learning | Code | 1
Targeted Adversarial Perturbations for Monocular Depth Prediction | Code | 1
D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack | | 0
On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples | | 0
Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors | Code | 0
Interpolation between Residual and Non-Residual Networks | Code | 1
Global Robustness Verification Networks | | 0
Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection | Code | 1
One-Shot Adversarial Attacks on Visual Tracking With Dual Attention | | 0
Robust Superpixel-Guided Attentional Adversarial Attack | | 0
What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images | | 0
Modeling Biological Immunity to Adversarial Examples | | 0
Benchmarking Adversarial Robustness on Image Classification | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
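
In these tables, "Attack: PGD20" reports robust accuracy under a 20-step Projected Gradient Descent attack [madry2018], and "Attack: AutoAttack" under the AutoAttack ensemble; higher claimed numbers mean the model withstands the attack better. Below is a minimal PGD sketch, assuming the same PyTorch setup as the FGSM example above; the step size, random start, and budget are common defaults, not values taken from these benchmark entries.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=20):
    """L-infinity Projected Gradient Descent (assumed hyperparameters).

    Repeated gradient ascent on the loss, projected back into the
    epsilon-ball around the clean input x after every step.
    """
    x_adv = x.clone().detach()
    # Random start inside the epsilon-ball, as is common practice.
    x_adv += torch.empty_like(x_adv).uniform_(-epsilon, epsilon)
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the epsilon-ball and the valid pixel range.
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Robust accuracy under this attack is then the fraction of test pairs (x, y) for which `model(pgd_attack(model, x, y)).argmax(1) == y`, with the model in eval mode.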