Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
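
To make the definition concrete, here is a minimal sketch of one classic perturbation search, the fast gradient sign method (FGSM). This is an illustrative example, not the method of the cited paper; the PyTorch classifier `model`, input `x`, and label `y` are assumed placeholders, and `epsilon` bounds the per-pixel change so the perturbation stays small.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step fast gradient sign method (Goodfellow et al., 2015).

    Returns an adversarial copy of `x` whose per-pixel deviation
    is at most `epsilon`.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move every pixel by at most epsilon in the direction that
    # increases the loss, then clip back to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a budget around 8/255 on images scaled to [0, 1], the change is typically hard to see yet often flips the predicted class of an undefended model.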

Papers

Showing 1701–1725 of 1808 papers

Title | Status | Hype
Application of Adversarial Examples to Physical ECG Signals | | 0
Physical-World Optical Adversarial Attacks on 3D Face Recognition | | 0
Sparse and Transferable Universal Singular Vectors Attack | | 0
A Perceptual Distortion Reduction Framework: Towards Generating Adversarial Examples with High Perceptual Quality and Attack Success Rate | | 0
OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training | | 0
White-Box Target Attack for EEG-Based BCI Regression Problems | | 0
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization | | 0
Classifier-independent Lower-Bounds for Adversarial Robustness | | 0
Distillation-Enhanced Physical Adversarial Attacks | | 0
Semantically Stealthy Adversarial Attacks against Segmentation Models | | 0
Distributed Estimation over Directed Graphs Resilient to Sensor Spoofing | | 0
A Novel Deep Learning based Model to Defend Network Intrusion Detection System against Adversarial Attacks | | 0
Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge | | 0
DLOVE: A new Security Evaluation Tool for Deep Learning Based Watermarking Techniques | | 0
DMS: Addressing Information Loss with More Steps for Pragmatic Adversarial Attacks | | 0
DO-AutoEncoder: Learning and Intervening Bivariate Causal Mechanisms in Images | | 0
DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics | | 0
Does Safety Training of LLMs Generalize to Semantically Related Natural Prompts? | | 0
Domain Adaptive Transfer Attack (DATA)-based Segmentation Networks for Building Extraction from Aerial Images | | 0
DoPa: A Comprehensive CNN Detection Methodology against Physical Adversarial Attacks | | 0
Doppelganger Method: Breaking Role Consistency in LLM Agent via Prompt-based Transferable Adversarial Attack | | 0
Double Backpropagation for Training Autoencoders against Adversarial Attack | | 0
DIP-Watermark: A Double Identity Protection Method Based on Robust Adversarial Watermark | | 0
Do we need entire training data for adversarial training? | | 0
DRO-Augment Framework: Robustness by Synergizing Wasserstein Distributionally Robust Optimization and Data Augmentation | | 0
Page 69 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
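
The "Attack: PGD20" entries above report robust accuracy under a 20-step projected gradient descent attack (Madry et al., 2018). Below is a minimal sketch of such an evaluation, assuming a PyTorch classifier and data loader; the L-infinity budget `epsilon` and step size `alpha` are illustrative placeholder values, not taken from the table.

```python
import torch
import torch.nn.functional as F

def pgd20_robust_accuracy(model, loader, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Accuracy on examples perturbed by 20-step L-infinity PGD."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        # Random start inside the epsilon-ball, as in the original attack.
        x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # One ascent step on the loss, then project back into the
            # epsilon-ball around the clean input and the valid range.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

AutoAttack, the other metric listed, is a stronger parameter-free ensemble of attacks; it is typically run via the authors' published library rather than reimplemented by hand.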