
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
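To make the definition concrete, here is a minimal sketch of one classic attack, the fast gradient sign method (FGSM): a single gradient step that nudges each input pixel in the direction that increases the model's loss. The names `model`, `x`, `y`, and `epsilon` are illustrative assumptions, not taken from any paper listed below.

```python
# Minimal FGSM sketch, assuming a PyTorch classifier `model` and a
# correctly labeled input batch (x, y) with pixels in [0, 1].
# `epsilon` bounds the L-infinity size of the perturbation.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return adversarial examples within an L-inf ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a budget such as epsilon = 8/255, the change is typically invisible to a human observer, yet it is often enough to flip the prediction of an undefended classifier.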

Papers

Showing 1676–1700 of 1808 papers

Title | Status | Hype
Snowball Adversarial Attack on Traffic Sign Classification | – | 0
Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions | – | 0
Universal Attacks on Equivariant Networks | – | 0
Solving Non-Convex Non-Differentiable Min-Max Games using Proximal Gradient Method | – | 0
Adversarial Attack on Hierarchical Graph Pooling Neural Networks | – | 0
Detecting and Segmenting Adversarial Graphics Patterns from Images | – | 0
A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger's Adversarial Attacks | – | 0
Sparse Adversarial Attack in Multi-agent Reinforcement Learning | – | 0
Adversarial Attack on Facial Recognition using Visible Light | – | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | – | 0
Device-aware Optical Adversarial Attack for a Portable Projector-camera System | – | 0
DFT-Based Adversarial Attack Detection in MRI Brain Imaging: Enhancing Diagnostic Accuracy in Alzheimer's Case Studies | – | 0
A Relaxed Optimization Approach for Adversarial Attacks against Neural Machine Translation Models | – | 0
Architecture Selection via the Trade-off Between Accuracy and Robustness | – | 0
Adversarial Attack on Deep Product Quantization Network for Image Retrieval | – | 0
A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement | – | 0
A Practical and Stealthy Adversarial Attack for Cyber-Physical Applications | – | 0
Differentially Private Reward Estimation with Preference Feedback | – | 0
Differential Privacy in Personalized Pricing with Nonparametric Demand Models | – | 0
Adversarial Attack on Deep Cross-Modal Hamming Retrieval | – | 0
Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking | – | 0
A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems | – | 0
DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking | – | 0
Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning | – | 0
Applying Tensor Decomposition to image for Robustness against Adversarial Attack | – | 0
Page 68 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | – | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | – | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | – | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | – | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | – | Unverified
6 | XU-Net | Robust Accuracy | 1 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | – | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | – | Unverified
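The "Attack: PGD20" and "Attack: AutoAttack" metrics above report robust accuracy: the percentage of test inputs a model still classifies correctly after the named attack is run against it. As a hedged sketch of the common PGD20 protocol (20 steps of L-infinity projected gradient descent with a random start), assuming a PyTorch `model`, a test `loader`, and the usual CIFAR-style budget epsilon = 8/255 with step size alpha = 2/255; none of these settings are taken from the specific entries above:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """20-step L-inf PGD with a random start (the usual 'PGD20' setting)."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()  # ascent step
            # Project back into the epsilon-ball around x and valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, device="cpu"):
    """Accuracy on PGD20 adversarial examples (cf. the 'Claimed' column)."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```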