
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
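The idea can be illustrated with the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks: perturb the input in the direction of the sign of the loss gradient. Below is a minimal PyTorch sketch, assuming a pretrained classifier `model`, an input batch `x` with values in [0, 1], and true labels `y`; the epsilon value is illustrative only.

```python
# Minimal FGSM sketch (assumes `model`, `x`, `y` already exist).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return adversarial examples inside an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, scaled by epsilon.
    perturbation = epsilon * x_adv.grad.sign()
    x_adv = torch.clamp(x_adv + perturbation, 0.0, 1.0)  # keep a valid image
    return x_adv.detach()
```

A successful attack is one where `model(fgsm_attack(model, x, y)).argmax(dim=1)` differs from `y`, even though the perturbed input looks unchanged to a person.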

Papers

Showing 1491–1500 of 1808 papers

Title | Status | Hype
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty | - | 0
A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers | - | 0
Adversarial Attacks and Defenses: An Interpretation Perspective | - | 0
Headless Horseman: Adversarial Attacks on Transfer Learning Models | - | 0
Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning | - | 0
Active Sentence Learning by Adversarial Uncertainty Sampling in Discrete Space | - | 0
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions | Code | 0
Towards Transferable Adversarial Attack against Deep Face Recognition | - | 0
Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images | - | 0
SimAug: Learning Robust Representations from 3D Simulation for Pedestrian Trajectory Prediction in Unseen Cameras | Code | 0
Page 150 of 181

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
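The metrics above report robust accuracy under a fixed attack; "PGD20", for example, denotes projected gradient descent run for 20 steps. As a rough sketch of how such a number is obtained (not the exact protocol behind these entries), one can run the attack on each test batch and measure accuracy on the perturbed inputs. The `model` and `loader` objects and the epsilon and step-size values below are assumed for illustration.

```python
# Hedged sketch: robust accuracy of `model` under an L-infinity PGD attack.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Iterated gradient-sign steps, projected back into the epsilon-ball around x."""
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-epsilon, epsilon), 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project into [x - epsilon, x + epsilon] and back into the valid image range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - epsilon), x + epsilon), 0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Percentage of test examples still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return 100.0 * correct / total
```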