
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
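
As an illustration, the fast gradient sign method (FGSM) is one of the simplest ways to construct such a perturbation: take a single step in the direction of the sign of the loss gradient with respect to the input. Below is a minimal PyTorch sketch; the model, inputs, and budget epsilon are placeholder assumptions, not tied to any particular paper listed on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM sketch: perturb x in the direction of the sign of
    the loss gradient, which tends to increase the loss on label y.
    epsilon is an illustrative L-infinity budget, not a fixed standard."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input inside the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

If the perturbed input is misclassified while the clean input was not, the attack has succeeded.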

Papers

Showing 1651–1700 of 1808 papers

Title | Status | Hype
Accelerating Monte Carlo Bayesian Inference via Approximating Predictive Uncertainty over Simplex | Code | 0
Zeroth-Order Stochastic Alternating Direction Method of Multipliers for Nonconvex Nonsmooth Optimization | – | 0
Functional Adversarial Attacks | Code | 0
Scaleable input gradient regularization for adversarial robustness | Code | 0
Thwarting finite difference adversarial attacks with output randomization | – | 0
DoPa: A Comprehensive CNN Detection Methodology against Physical Adversarial Attacks | – | 0
Taking Care of The Discretization Problem: A Comprehensive Study of the Discretization Problem and A Black-Box Adversarial Attack in Discrete Integer Domain | Code | 0
A critique of the DeepSec Platform for Security Analysis of Deep Learning Models | – | 0
Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models | Code | 0
Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables | Code | 0
Interpreting and Evaluating Neural Network Robustness | – | 0
Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain | – | 0
CharBot: A Simple and Effective Method for Evading DGA Classifiers | – | 0
Weight Map Layer for Noise and Adversarial Attack Robustness | – | 0
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks | Code | 0
NATTACK: A STRONG AND UNIVERSAL GAUSSIAN BLACK-BOX ADVERSARIAL ATTACK | – | 0
Second-Order Adversarial Attack and Certifiable Robustness | – | 0
CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild | Code | 0
POBA-GA: Perturbation Optimized Black-Box Adversarial Attacks via Genetic Algorithm | – | 0
Minimizing Perceived Image Quality Loss Through Adversarial Attack Scoping | – | 0
blessing in disguise: Designing Robust Turing Test by Employing Algorithm Unrobustness | – | 0
Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks | Code | 0
Defensive Quantization: When Efficiency Meets Robustness | – | 0
AT-GAN: An Adversarial Generator Model for Non-constrained Adversarial Examples | – | 0
Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense | – | 0
Black-Box Decision based Adversarial Attack with Symmetric α-stable Distribution | – | 0
Towards Analyzing Semantic Robustness of Deep Neural Networks | Code | 0
HopSkipJumpAttack: A Query-Efficient Decision-Based Attack | Code | 0
Curls & Whey: Boosting Black-Box Adversarial Attacks | Code | 0
Adversarial Attacks against Deep Saliency Models | – | 0
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems | Code | 0
Learning to Defense by Learning to Attack | – | 0
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks | Code | 0
The LogBarrier adversarial attack: making effective use of decision boundary information | Code | 0
Defending against Whitebox Adversarial Attacks via Randomized Discretization | Code | 0
A Formalization of Robustness for Deep Neural Networks | – | 0
Adversarial Attacks on Deep Neural Networks for Time Series Classification | Code | 0
Attribution-driven Causal Analysis for Detection of Adversarial Examples | – | 0
Attack Type Agnostic Perceptual Enhancement of Adversarial Images | – | 0
Adversarial Out-domain Examples for Generative Models | Code | 0
Adversarial Examples on Graph Data: Deep Insights into Attack and Defense | Code | 0
Adversarial Attack and Defense on Point Sets | – | 0
On the Effectiveness of Low Frequency Perturbations | – | 0
Robust Decision Trees Against Adversarial Examples | Code | 0
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch | Code | 0
There are No Bit Parts for Sign Bits in Black-Box Attacks | – | 0
Examining Adversarial Learning against Graph-based IoT Malware Detection Systems | – | 0
Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples? | Code | 0
Optimal Attack against Autoregressive Models by Manipulating the Environment | – | 0
The Efficacy of SHIELD under Different Threat Models | – | 0
Page 34 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
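
In the tables above, "Attack: PGD20" denotes robust accuracy under 20-step projected gradient descent (PGD), and AutoAttack is a stronger, parameter-free ensemble of attacks. As a rough sketch of how a PGD20 number is produced (the model, epsilon, and step size below are illustrative assumptions, not the benchmarks' actual settings):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """PGD sketch: repeat signed-gradient steps, projecting back into
    the L-infinity epsilon-ball around the clean input after each step.
    steps=20 corresponds to the 'PGD20' setting; epsilon and alpha are
    placeholder values."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the epsilon-ball around x, then the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```

Robust accuracy is then the fraction of test inputs the model still classifies correctly after the attack.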