
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
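As an illustration of the definition above, the following is a minimal sketch of one classic untargeted attack, the Fast Gradient Sign Method (FGSM): it perturbs the input in the direction of the loss gradient so the prediction changes while the perturbation stays small. The PyTorch names `model`, `image`, and `label` and the epsilon value are illustrative assumptions, not taken from any paper listed below.

```python
# Minimal FGSM sketch, assuming a PyTorch classifier `model` and one (image, label) pair
# with pixel values in [0, 1]. All names and hyperparameters here are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Return an adversarial example inside an L-infinity ball of radius epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    # Step in the direction that increases the loss, then clip back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```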

Papers

Showing 1601-1650 of 1808 papers

Title | Status | Hype
Residue-Based Natural Language Adversarial Attack Detection | Code | 0
Resilience of Named Entity Recognition Models under Adversarial Attack | Code | 0
KGPA: Robustness Evaluation for Large Language Models via Cross-Domain Knowledge Graphs | Code | 0
KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations | Code | 0
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary | Code | 0
Adversarial and Clean Data Are Not Twins | Code | 0
Adversarial Training for Physics-Informed Neural Networks | Code | 0
Accelerated Stochastic Gradient-free and Projection-free Methods | Code | 0
Resisting Deep Learning Models Against Adversarial Attack Transferability via Feature Randomization | Code | 0
XSS Adversarial Attacks Based on Deep Reinforcement Learning: A Replication and Extension Study | Code | 0
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks | Code | 0
Disttack: Graph Adversarial Attacks Toward Distributed GNN Training | Code | 0
Adversarial Self-Defense for Cycle-Consistent GANs | Code | 0
Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems | Code | 0
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification | Code | 0
TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack | Code | 0
Learning Black-Box Attackers with Transferable Priors and Query Feedback | Code | 0
Advancing Adversarial Robustness in GNeRFs: The IL2-NeRF Attack | Code | 0
BitAbuse: A Dataset of Visually Perturbed Texts for Defending Phishing Attacks | Code | 0
Deep k-NN Defense against Clean-label Data Poisoning Attacks | Code | 0
Task-generalizable Adversarial Attack based on Perceptual Metric | Code | 0
Learning to Accelerate Approximate Methods for Solving Integer Programming via Early Fixing | Code | 0
Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies | Code | 0
Rethinking Independent Cross-Entropy Loss For Graph-Structured Data | Code | 0
Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent | Code | 0
Rethinking Targeted Adversarial Attacks For Neural Machine Translation | Code | 0
Learning to Learn by Zeroth-Order Oracle | Code | 0
Learning to Learn Transferable Attack | Code | 0
Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors | Code | 0
Learning Transferable Adversarial Examples via Ghost Networks | Code | 0
Learning Visually-Grounded Semantics from Contrastive Adversarial Samples | Code | 0
Learn To Pay Attention | Code | 0
Structured Adversarial Attack: Towards General Implementation and Better Interpretability | Code | 0
Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems | Code | 0
Adversarial sample generation and training using geometric masks for accurate and resilient license plate character recognition | Code | 0
Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks | Code | 0
Adversarial Robustness for Visual Grounding of Multimodal Large Language Models | Code | 0
REGroup: Rank-aggregating Ensemble of Generative Classifiers for Robust Predictions | Code | 0
LiDAttack: Robust Black-box Attack on LiDAR-based Object Detection | Code | 0
Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations | Code | 0
Light-weight Calibrator: a Separable Component for Unsupervised Domain Adaptation | Code | 0
LimeAttack: Local Explainable Method for Textual Hard-Label Adversarial Attack | Code | 0
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks | Code | 0
Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness | Code | 0
Adversarial Robustness Analysis of Vision-Language Models in Medical Image Segmentation | Code | 0
Towards Resilient and Secure Smart Grids against PMU Adversarial Attacks: A Deep Learning-Based Robust Data Engineering Approach | Code | 0
Local Aggressive Adversarial Attacks on 3D Point Cloud | Code | 0
Adversarial Purification of Information Masking | Code | 0
Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting | Code | 0
Disrupting Deep Uncertainty Estimation Without Harming Accuracy | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
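The "Attack: PGD20" and "Attack: AutoAttack" metrics above report robust accuracy: the percentage of test inputs a model still classifies correctly after a bounded adversarial perturbation. Below is a minimal sketch of how such a figure could be computed with a 20-step L-infinity PGD attack; `model`, `loader`, and the epsilon/step-size values are placeholder assumptions and do not reproduce the exact evaluation protocol behind any row in the tables.

```python
# Sketch of robust-accuracy evaluation under 20-step L-infinity PGD.
# `model` (a PyTorch classifier in eval mode) and `loader` (a DataLoader of
# (images, labels) with pixels in [0, 1]) are assumed; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """Projected gradient descent with a random start inside the epsilon ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()          # ascent step on the loss
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # project back onto the ball
        x_adv = x_adv.clamp(0.0, 1.0)                      # keep pixels valid
    return x_adv.detach()

def robust_accuracy(model, loader):
    """Fraction of examples still classified correctly after the attack, as a percentage."""
    correct, total = 0, 0
    for x, y in loader:
        preds = model(pgd_attack(model, x, y)).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total  # comparable in scale to the "Claimed" column
```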