SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
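The idea above can be sketched with the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks. The toy logistic-regression "model" below, its weights, and the epsilon value are all illustrative assumptions, not drawn from this page; for logistic regression the input gradient of the loss points along `sign(w)`, so a tiny L-infinity step in that direction can flip the prediction.

```python
import numpy as np

# Toy logistic-regression "model" -- weights and inputs are illustrative
# assumptions chosen so that a small perturbation flips the prediction.

def predict(w, b, x):
    """Probability of the positive class under a logistic-regression model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, x, eps):
    """FGSM-style step: move along the sign of the loss gradient w.r.t. x.
    For logistic regression (true label 0) that gradient direction is
    sign(w), so the perturbation is simply eps * sign(w)."""
    return x + eps * np.sign(w)

w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.1, 0.2, -0.1])        # clean input: w @ x = -0.35 -> class 0
adv = fgsm_perturb(w, x, eps=0.3)     # perturbed input: w @ adv = 0.7 -> class 1

print(predict(w, b, x) > 0.5)    # clean prediction: False
print(predict(w, b, adv) > 0.5)  # adversarial prediction: True
```

Each coordinate moves by at most 0.3, yet the predicted class flips; against image classifiers the same L-infinity budget, spread over thousands of pixels, is typically invisible to a human viewer.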

Papers

Showing 501–550 of 1808 papers

Title | Status | Hype
Class-RAG: Real-Time Content Moderation with Retrieval Augmented Generation | | 0
Information Importance-Aware Defense against Adversarial Attack for Automatic Modulation Classification: An XAI-Based Approach | | 0
Efficient and Effective Universal Adversarial Attack against Vision-Language Pre-training Models | | 0
A Survey on Physical Adversarial Attacks against Face Recognition Systems | | 0
Understanding Model Ensemble in Transferable Adversarial Attack | | 0
Graded Suspiciousness of Adversarial Texts to Human | | 0
SCA: Improve Semantic Consistent in Unrestricted Adversarial Attacks via DDPM Inversion | Code | 0
Signal Adversarial Examples Generation for Signal Detection Network via White-Box Attack | | 0
Cross-Modality Attack Boosted by Gradient-Evolutionary Multiform Optimization | | 0
Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations | | 0
SWE2: SubWord Enriched and Significant Word Emphasized Framework for Hate Speech Detection | | 0
Adversarial Attacks on Parts of Speech: An Empirical Study in Text-to-Image Generation | Code | 0
Cloud Adversarial Example Generation for Remote Sensing Image Classification | | 0
Hidden Activations Are Not Enough: A General Approach to Neural Network Predictions | Code | 0
ITPatch: An Invisible and Triggered Physical Adversarial Patch against Traffic Sign Recognition | | 0
Deep generative models as an adversarial attack strategy for tabular machine learning | Code | 0
TEAM: Temporal Adversarial Examples Attack Model against Network Intrusion Detection System Applied to RNN | | 0
Golden Ratio Search: A Low-Power Adversarial Attack for Deep Learning based Modulation Classification | | 0
Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective | | 0
XSub: Explanation-Driven Adversarial Attack against Blackbox Classifiers via Feature Substitution | | 0
Detecting and Defending Against Adversarial Attacks on Automatic Speech Recognition via Diffusion Models | Code | 0
High-Frequency Anti-DreamBooth: Robust Defense against Personalized Image Synthesis | Code | 0
D-CAPTCHA++: A Study of Resilience of Deepfake CAPTCHA under Transferable Imperceptible Adversarial Attack | | 0
Unrevealed Threats: A Comprehensive Study of the Adversarial Robustness of Underwater Image Enhancement Models | | 0
Unlearning or Concealment? A Critical Analysis and Evaluation Metrics for Unlearning in Diffusion Models | | 0
Adversarial Attacks on Data Attribution | Code | 0
A practical approach to evaluating the adversarial distance for machine learning classifiers | Code | 0
OpenFact at CheckThat! 2024: Combining Multiple Attack Methods for Effective Adversarial Text Generation | | 0
One-Index Vector Quantization Based Adversarial Attack on Image Classification | | 0
Network transferability of adversarial patches in real-time object detection | Code | 0
Adversarial Manhole: Challenging Monocular Depth Estimation and Semantic Segmentation Models with Patch Attack | Code | 0
TF-Attack: Transferable and Fast Adversarial Attacks on Large Language Models | | 0
2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems | Code | 0
Probing the Robustness of Vision-Language Pretrained Models: A Multimodal Adversarial Attack Approach | | 0
BankTweak: Adversarial Attack against Multi-Object Trackers by Manipulating Feature Banks | | 0
Enhancing Transferability of Adversarial Attacks with GE-AdvGAN+: A Comprehensive Framework for Gradient Editing | | 0
Query-Efficient Video Adversarial Attack with Stylized Logo | | 0
Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks | Code | 0
Correlation Analysis of Adversarial Attack in Time Series Classification | | 0
Adversarial Attack for Explanation Robustness of Rationalization Models | | 0
MsMemoryGAN: A Multi-scale Memory GAN for Palm-vein Adversarial Purification | | 0
GAIM: Attacking Graph Neural Networks via Adversarial Influence Maximization | | 0
DFT-Based Adversarial Attack Detection in MRI Brain Imaging: Enhancing Diagnostic Accuracy in Alzheimer's Case Studies | | 0
Evaluating the Validity of Word-level Adversarial Attacks with Large Language Models | Code | 0
A Multi-task Adversarial Attack Against Face Authentication | Code | 0
Robust Active Learning (RoAL): Countering Dynamic Adversaries in Active Learning with Elastic Weight Consolidation | | 0
Enhancing Adversarial Attacks via Parameter Adaptive Adversarial Attack | | 0
ReToMe-VA: Recursive Token Merging for Video Diffusion-based Unrestricted Adversarial Attack | | 0
Improving Network Interpretability via Explanation Consistency Evaluation | | 0
Simple Perturbations Subvert Ethereum Phishing Transactions Detection: An Empirical Analysis | | 0
Page 11 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified