Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of a model's input that changes the model's prediction. The perturbation can be so small that it is imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
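
To make the definition concrete, below is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM): it nudges each input pixel in the direction that increases the model's loss, bounded by a small budget epsilon. The function name, epsilon value, and the assumption of a differentiable PyTorch classifier are illustrative, not taken from any paper listed on this page.

```python
# Minimal FGSM sketch. Assumes `model` is a differentiable PyTorch
# classifier taking image batches in [0, 1]; epsilon is illustrative.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return a perturbed copy of batch `x` that `model` is more
    likely to misclassify, with the perturbation bounded by `epsilon`
    per pixel (L-infinity norm)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```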

Papers

Showing 1126–1150 of 1808 papers

Title | Hype
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty | 0
Near Optimal Adversarial Attacks on Stochastic Bandits and Defenses with Smoothed Responses | 0
NeRFTAP: Enhancing Transferability of Adversarial Patches on Face Recognition using Neural Radiance Fields | 0
Adversarial Robustness for Deep Learning-based Wildfire Prediction Models | 0
ADMM based Distributed State Observer Design under Sparse Sensor Attacks | 0
Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain | 0
Vulnerability of Deep Learning | 0
Neural Networks Playing Dough: Investigating Deep Cognition With a Gradient-Based Adversarial Attack | 0
Adversarial Relighting Against Face Recognition | 0
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition | 0
Towards Evaluating the Robustness of Automatic Speech Recognition Systems via Audio Style Transfer | 0
NODEAttack: Adversarial Attack on the Energy Consumption of Neural ODEs | 0
Wasserstein Adversarial Examples on Univariant Time Series Data | 0
Noise-BERT: A Unified Perturbation-Robust Framework with Noise Alignment Pre-training for Noisy Slot Filling Task | 0
Adversarial Attack for Asynchronous Event-based Data | 0
NoisyHate: Mining Online Human-Written Perturbations for Realistic Robustness Benchmarking of Content Moderation Models | 0
A Differentiable Language Model Adversarial Attack on Text Classifiers | 0
Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models | 0
Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity | 0
No Query, No Access | 0
No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks | 0
Fooling Network Interpretation in Image Classification | 0
An alternative proof of the vulnerability of retrieval in high intrinsic dimensionality neighborhood | 0
Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks | 0
Now You See It, Now You Don't: Adversarial Vulnerabilities in Computational Pathology | 0
Page 46 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
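
The "Attack: PGD20" metric above reports robust accuracy: the share of test inputs a model still classifies correctly after a 20-step projected gradient descent (PGD) attack. Below is a hedged sketch of that evaluation loop; the epsilon and step size are illustrative assumptions that the tables do not specify.

```python
# Sketch of robust-accuracy evaluation under a 20-step L-infinity PGD
# attack. All hyperparameters (epsilon, alpha) and names are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    # Random start inside the epsilon-ball, as is common for PGD.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient ascent step, then project back into the epsilon-ball
        # around the clean input and the valid image range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Accuracy on adversarially perturbed inputs, as a percentage
    (the same scale as the Claimed column above)."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```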