
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
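
The definition can be made concrete with the Fast Gradient Sign Method (FGSM), a classic single-step attack. The sketch below is illustrative only: the `model`, the [0, 1] pixel range, and the epsilon budget are assumptions, not settings taken from any paper on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Single-step L-infinity attack: nudge each pixel by +/- epsilon
    in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The sign of the gradient gives the loss-increasing direction;
    # clamping keeps the result a valid image.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1)
    return x_adv.detach()
```

A budget of 8/255 per pixel is usually invisible to the human eye, yet perturbations of this size routinely flip the predictions of undefended classifiers.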

Papers

Showing 1326–1350 of 1808 papers

Title | Status | Hype
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds | - | 0
Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series | - | 0
Random Transformation of Image Brightness for Adversarial Attack | Code | 0
Exploring Adversarial Fake Images on Face Manifold | - | 0
Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks | - | 0
Robust Text CAPTCHAs Using Adversarial Examples | - | 0
Local Competition and Stochasticity for Adversarial Robustness in Deep Learning | - | 0
Towards Robustness of Deep Neural Networks via Regularization | - | 0
Consistency-Sensitivity Guided Ensemble Black-Box Adversarial Attacks in Low-Dimensional Spaces | - | 0
Adversarial Attack on Deep Cross-Modal Hamming Retrieval | - | 0
Learn2Weight: Weights Transfer Defense against Similar-domain Adversarial Attacks | - | 0
Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem | - | 0
Stabilized Medical Attacks | - | 0
Identifying Informative Latent Variables Learned by GIN via Mutual Information | - | 0
Practical Order Attack in Deep Ranking | - | 0
Meta-Attack: Class-Agnostic and Model-Agnostic Physical Adversarial Attack | - | 0
AT-GAN: An Adversarial Generative Model for Non-constrained Adversarial Examples | - | 0
Adversarial Example Detection Using Latent Neighborhood Graph | - | 0
An Adversarial Attack via Feature Contributive Regions | - | 0
Black-box Adversarial Attacks on Monocular Depth Estimation Using Evolutionary Multi-objective Optimization | - | 0
Adjust-free adversarial example generation in speech recognition using evolutionary multi-objective optimization under black-box condition | - | 0
Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring | - | 0
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks | - | 0
Variational Quantum Cloning: Improving Practicality for Quantum Cryptanalysis | - | 0
A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks | Code | 0
Page 54 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet20 | Test Accuracy | 89.95 | 89.95 (1) | Community Verified
2 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
7 | XU-Net | Robust Accuracy | 1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
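
In the tables above, a metric such as "Attack: PGD20" conventionally denotes robust accuracy: the percentage of test examples a model still classifies correctly after a 20-step Projected Gradient Descent (PGD) attack. "Attack: AutoAttack" reports the same quantity under the AutoAttack ensemble of Croce and Hein. A minimal sketch of such an evaluation follows; the `model`, `loader`, and the common epsilon = 8/255, alpha = 2/255 settings are assumptions, since the exact protocols behind these rows are not shown here.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Iterated FGSM with projection back onto the epsilon L-infinity ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the allowed perturbation ball, then the pixel range.
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Share of examples still classified correctly after the attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```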