Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
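
To make this concrete, here is a minimal sketch of the fast gradient sign method (FGSM) of Goodfellow et al., one of the simplest attacks of this kind. The PyTorch setting, the function name fgsm_attack, and the perturbation budget of 8/255 are illustrative assumptions, not details taken from the source above.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # One-step FGSM: shift every input value by +/- epsilon in the
        # direction that increases the classification loss.
        # (epsilon = 8/255 is an illustrative, commonly used budget.)
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in the valid [0, 1] range

For images in [0, 1], a per-pixel budget of 8/255 is typically invisible to the human eye, yet it is often enough to flip an undefended model's prediction.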

Papers

Showing 1476–1500 of 1808 papers

Title | Status | Hype
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior | Code | 1
Inline Detection of DGA Domains Using Side Information |  | 0
Frequency-Tuned Universal Adversarial Attacks |  | 0
Using an ensemble color space model to tackle adversarial examples |  | 0
SAD: Saliency-based Defenses Against Adversarial Examples |  | 0
Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world |  | 0
No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks |  | 0
Search Space of Adversarial Perturbations against Image Filters |  | 0
Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems |  | 0
Double Backpropagation for Training Autoencoders against Adversarial Attack |  | 0
Security of Deep Learning based Lane Keeping System under Physical-World Adversarial Attack |  | 0
Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems | Code | 1
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies | Code | 2
Applying Tensor Decomposition to image for Robustness against Adversarial Attack |  | 0
Adversarial Ranking Attack and Defense | Code | 1
Adversarial Attack on Deep Product Quantization Network for Image Retrieval |  | 0
Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition |  | 0
A Bayes-Optimal View on Adversarial Examples |  | 0
Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent | Code | 0
Robust Stochastic Bandit Algorithms under Probabilistic Unbounded Adversarial Attack |  | 0
Undersensitivity in Neural Reading Comprehension |  | 0
Stabilizing Differentiable Architecture Search via Perturbation-based Regularization | Code | 1
Adversarial Data Encryption |  | 0
Watch out! Motion is Blurring the Vision of Your Deep Neural Networks | Code | 1
Renofeation: A Simple Transfer Learning Method for Improved Adversarial Robustness | Code | 1
Page 60 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 |  | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 |  | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 |  | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 |  | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 |  | Unverified
6 | XU-Net | Robust Accuracy | 1 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 |  | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 |  | Unverified
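
In the Metric column, "Attack: PGD20" conventionally denotes accuracy measured while the model is attacked with 20 steps of projected gradient descent (PGD), and "Attack: AutoAttack" denotes accuracy under the AutoAttack evaluation suite. As a rough illustration of how such a number is computed, here is a minimal PyTorch sketch; pgd_attack, robust_accuracy, the step size alpha = 2/255, and the budget epsilon = 8/255 are assumptions for illustration, not the exact protocol behind the rows above.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
        # L-infinity PGD: repeated signed-gradient steps, each followed by a
        # projection back into the epsilon-ball around the clean input x.
        # (epsilon, alpha, and steps are illustrative values, not the
        # benchmark's exact settings.)
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)  # project
                x_adv = x_adv.clamp(0.0, 1.0)  # stay in the valid pixel range
        return x_adv.detach()

    def robust_accuracy(model, loader, **attack_kwargs):
        # Fraction of test examples still classified correctly under attack,
        # reported as a percentage like the Claimed column above.
        model.eval()
        correct, total = 0, 0
        for x, y in loader:
            x_adv = pgd_attack(model, x, y, **attack_kwargs)
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return 100.0 * correct / total

AutoAttack serves the same purpose but runs a fixed ensemble of attacks with no parameters to tune, which is why it is widely treated as the more reproducible of the two evaluations.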