
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
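For intuition, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks. It assumes a PyTorch image classifier; the names `model`, `x`, `y`, and the budget `epsilon` are illustrative placeholders, not tied to any paper listed on this page.

```python
# Minimal FGSM sketch (assumes a PyTorch classifier; names are illustrative).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step Fast Gradient Sign Method: nudge x in the direction that
    most increases the classification loss, bounded by epsilon in L-infinity."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The sign of the input gradient gives the per-pixel direction of
    # steepest loss increase; a small epsilon keeps the change imperceptible.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

An input perturbed this way often flips the model's prediction even though it looks unchanged to a human observer, which is exactly the failure mode the papers below study.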

Papers

Showing 451–500 of 1808 papers

Title | Status | Hype
Attack Transferability Characterization for Adversarially Robust Multi-label Classification | Code | 0
Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration | Code | 0
Look Closer to Your Enemy: Learning to Attack via Teacher-Student Mimicking | Code | 0
LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate | Code | 0
Generating Natural Adversarial Examples | Code | 0
From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation | Code | 0
Adaptive Image Transformations for Transfer-based Adversarial Attack | Code | 0
Rob-GAN: Generator, Discriminator, and Adversarial Attacker | Code | 0
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization | Code | 0
Enhancing Adversarial Attacks: The Similar Target Method | Code | 0
Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency | Code | 0
Adversarial Laser Spot: Robust and Covert Physical-World Attack to DNNs | Code | 0
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning | Code | 0
Forging and Removing Latent-Noise Diffusion Watermarks Using a Single Image | Code | 0
From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework | Code | 0
Generating Textual Adversaries with Minimal Perturbation | Code | 0
Foiling Explanations in Deep Neural Networks | Code | 0
A Theoretical View of Linear Backpropagation and Its Convergence | Code | 0
FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs | Code | 0
FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques | Code | 0
A Targeted Universal Attack on Graph Convolutional Network | Code | 0
FireBERT: Hardening BERT-based classifiers against adversarial attack | Code | 0
Feature Space Perturbations Yield More Transferable Adversarial Examples | Code | 0
Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients | Code | 0
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness | Code | 0
Hidden Activations Are Not Enough: A General Approach to Neural Network Predictions | Code | 0
Fast Inference of Removal-Based Node Influence | Code | 0
FDA: Feature Disruptive Attack | Code | 0
Adversarial Images for Variational Autoencoders | Code | 0
Fast Adversarial CNN-based Perturbation Attack of No-Reference Image Quality Metrics | Code | 0
Bridging the Performance Gap between FGSM and PGD Adversarial Training | Code | 0
AdvGPS: Adversarial GPS for Multi-Agent Perception Attack | Code | 0
AdvHat: Real-world adversarial attack on ArcFace Face ID system | Code | 0
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions | Code | 0
NMT-Obfuscator Attack: Ignore a sentence in translation with only one word | Code | 0
Noise-based cyberattacks generating fake P300 waves in brain–computer interfaces | Code | 0
Adversarial Attack for RGB-Event based Visual Object Tracking | Code | 0
Fashion-Guided Adversarial Attack on Person Segmentation | Code | 0
Generating Unrestricted 3D Adversarial Point Clouds | Code | 0
Explainable Graph Neural Networks Under Fire | Code | 0
Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack | Code | 0
Adversarial attacks on neural networks through canonical Riemannian foliations | Code | 0
Explaining Adversarial Robustness of Neural Networks from Clustering Effect Perspective | Code | 0
Expanding Scope: Adapting English Adversarial Attacks to Chinese | Code | 0
Explainable and Safe Reinforcement Learning for Autonomous Air Mobility | Code | 0
Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection | Code | 0
Exacerbating Algorithmic Bias through Fairness Attacks | Code | 0
Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables | Code | 0
Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack | Code | 0
Excess Capacity and Backdoor Poisoning | Code | 0
Page 10 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
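For reference, a metric like "Attack: PGD20" reports accuracy on adversarial examples crafted with 20 steps of Projected Gradient Descent, and AutoAttack is a standardized ensemble of parameter-free attacks commonly used for the same purpose. Below is a minimal, illustrative sketch of measuring robust accuracy under PGD-20 for a PyTorch classifier; `model`, `loader`, and all hyperparameters are assumptions, not the evaluation protocol behind these specific entries.

```python
# Illustrative PGD-20 robust-accuracy evaluation (PyTorch; names assumed).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Projected Gradient Descent: iterated signed-gradient steps,
    projected back into the L-infinity ball of radius epsilon around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project onto the epsilon-ball, then the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    """Percentage of test examples still classified correctly after attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```

A claimed score such as 78.68 would correspond to the value returned by an evaluation of this kind on the benchmark's test set; the "Verified" column stays empty until the result has been independently reproduced.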