
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
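
To make the definition concrete, here is a minimal sketch of the single-step Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks. The PyTorch code is illustrative only: the model, the input batch, and the epsilon budget are placeholder assumptions, not settings taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Single-step FGSM: nudge each pixel in the direction that increases the loss.

    model: a classifier returning logits; x: inputs scaled to [0, 1];
    y: true labels; epsilon: L-infinity perturbation budget (illustrative value).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back to valid pixels.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

With epsilon = 8/255, each pixel changes by at most 8 intensity levels, which is usually invisible to a human viewer yet often enough to flip the model's prediction.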

Papers

Showing 1651–1675 of 1808 papers

Title | Status | Hype
Revisiting DeepFool: generalization and improvement | Code | 0
Adversarial Attack via Dual-Stage Network Erosion | Code | 0
Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration | Code | 0
Logits are predictive of network type | Code | 0
Look Closer to Your Enemy: Learning to Attack via Teacher-Student Mimicking | Code | 0
LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate | Code | 0
AdjointDEIS: Efficient Gradients for Diffusion Models | Code | 0
LP-BFGS attack: An adversarial attack based on the Hessian with limited pixels | Code | 0
RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World | Code | 0
Adversarial Attack Generation Empowered by Min-Max Optimization | Code | 0
Adversarial Attacks on Spiking Convolutional Neural Networks for Event-based Vision | Code | 0
Susceptibility of Adversarial Attack on Medical Image Segmentation Models | Code | 0
RoBIC: A benchmark suite for assessing classifiers robustness | Code | 0
Disrupting Adversarial Transferability in Deep Neural Networks | Code | 0
A New Perspective on Stabilizing GANs training: Direct Adversarial Training | Code | 0
Malafide: a novel adversarial convolutive noise attack against deepfake and spoofing detection systems | Code | 0
Unfooling Perturbation-Based Post Hoc Explainers | Code | 0
Demonstration of an Adversarial Attack Against a Multimodal Vision Language Model for Pathology Imaging | Code | 0
SVASTIN: Sparse Video Adversarial Attack via Spatio-Temporal Invertible Neural Networks | Code | 0
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models | Code | 0
Different Spectral Representations in Optimized Artificial Neural Networks and Brains | Code | 0
Switching Transferable Gradient Directions for Query-Efficient Black-Box Adversarial Attacks | Code | 0
Towards Safe Synthetic Image Generation On the Web: A Multimodal Robust NSFW Defense and Million Scale Dataset | Code | 0
Differentiable Adversarial Attacks for Marked Temporal Point Processes | Code | 0
MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet20 | Test Accuracy | 89.95 | 89.95 (1) | Community Verified
2 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
3 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
4 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
5 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
6 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
7 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
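
The "Attack: PGD20" rows report accuracy on adversarial examples generated with 20 steps of projected gradient descent (PGD), while AutoAttack is a standardized ensemble of parameter-free attacks often used for robustness leaderboards. As a rough illustration of how such robust-accuracy numbers are produced, below is a minimal PGD-20 evaluation sketch in PyTorch; the epsilon budget, step size, and data loader are assumptions for the example and do not reproduce the exact protocol behind any entry above.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """L-infinity PGD: repeated FGSM-style steps, projected back into the epsilon ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, project onto the epsilon ball, stay in the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv

def robust_accuracy(model, loader):
    """Share of test examples still classified correctly after a PGD-20 attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

The epsilon = 8/255, alpha = 2/255 pairing is a common convention for CIFAR-scale images, but the entries above may use different budgets, so the sketch is indicative rather than a reproduction of any claimed result.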