
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
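
For intuition, here is a minimal sketch of one classic way to find such a perturbation, the fast gradient sign method (FGSM). This is an illustrative example, not a method from this page; the model, inputs, and the 8/255 budget are placeholder assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Fast gradient sign method: take one signed-gradient step of size
    epsilon in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small epsilon, the perturbed image typically looks identical to the original while the model's predicted label changes.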

Papers

Showing 1551–1575 of 1808 papers

Title | Status | Hype
Investigating Resistance of Deep Learning-based IDS against Adversaries using min-max Optimization | — | 0
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks | Code | 0
Word-level Textual Adversarial Attacking as Combinatorial Optimization | Code | 0
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks | — | 0
Learning to Learn by Zeroth-Order Oracle | Code | 0
Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes | Code | 0
SPARK: Spatial-aware Online Incremental Attack Against Visual Tracking | Code | 0
LanCe: A Comprehensive and Lightweight CNN Defense Methodology against Physical Adversarial Attacks on Embedded Multimedia Applications | — | 0
ODE guided Neural Data Augmentation Techniques for Time Series Data and its Benefits on Robustness | — | 0
Real-world adversarial attack on MTCNN face detection system | Code | 0
On Robustness of Neural Ordinary Differential Equations | Code | 0
Learning deep forest with multi-scale Local Binary Pattern features for face anti-spoofing | — | 0
Adversarial Learning of Deepfakes in Accounting | — | 0
AdvSPADE: Realistic Unrestricted Attacks for Semantic Segmentation | — | 0
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies | — | 0
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks | Code | 0
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions | — | 0
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack | — | 0
Role of Spatial Context in Adversarial Robustness for Object Detection | Code | 0
Deep k-NN Defense against Clean-label Data Poisoning Attacks | Code | 0
Universal Adversarial Attack Using Very Few Test Examples | — | 0
Learning Key Steps to Attack Deep Reinforcement Learning Agents | — | 0
Robust saliency maps with distribution-preserving decoys | — | 0
SELF-KNOWLEDGE DISTILLATION ADVERSARIAL ATTACK | — | 0
DO-AutoEncoder: Learning and Intervening Bivariate Causal Mechanisms in Images | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified
6 | XU-Net | Robust Accuracy | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified
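
For context on the Metric column: rows like "Attack: PGD20" report robust accuracy, the fraction of test inputs a model still classifies correctly after a 20-step projected gradient descent (PGD) attack; AutoAttack is a stronger, parameter-free attack ensemble used the same way. Below is a minimal sketch of that evaluation, assuming a PyTorch classifier with inputs in [0, 1]; the epsilon/alpha values are common choices, not taken from this page:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """L-infinity PGD: repeated signed-gradient ascent on the loss,
    projected back into the epsilon-ball around the clean input."""
    x0 = x.clone().detach()
    x_adv = x0.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project onto the epsilon-ball, then the valid pixel range.
            x_adv = (x0 + (x_adv - x0).clamp(-epsilon, epsilon)).clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, x, y, **pgd_kwargs):
    """Share of examples still classified correctly under attack,
    i.e. the quantity the PGD20 rows above report (as a percentage)."""
    x_adv = pgd_attack(model, x, y, **pgd_kwargs)
    with torch.no_grad():
        return (model(x_adv).argmax(dim=1) == y).float().mean().item()
```

A defense's claimed number is typically this accuracy on the full test set; verification reruns the same attack against the released model.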