Adversarial Attack

An adversarial attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
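
For instance, one of the simplest gradient-based attacks, the Fast Gradient Sign Method (FGSM), perturbs an input in the direction of the sign of the loss gradient. Below is a minimal sketch in PyTorch, assuming a classifier `model`, an input batch `image` with pixel values in [0, 1], and its ground-truth `label`; the names and the budget `epsilon` are illustrative placeholders, not taken from any specific paper listed here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Return a perturbed copy of `image` intended to flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0, 1).detach()
```

With epsilon = 8/255, each pixel changes by at most 8 intensity levels out of 255, which is usually hard to notice by eye even when it changes the predicted class.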

Papers

Showing 476–500 of 1808 papers

Title | Status | Hype
Fall Leaf Adversarial Attack on Traffic Sign Classification | | 0
Visual Adversarial Attack on Vision-Language Models for Autonomous Driving | | 0
Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack | | 0
Scaling Laws for Black box Adversarial Attacks | | 0
Improving the Transferability of Adversarial Attacks on Face Recognition with Diverse Parameters Augmentation | | 0
Evaluating the Robustness of the "Ensemble Everything Everywhere" Defense | | 0
NMT-Obfuscator Attack: Ignore a sentence in translation with only one word | Code | 0
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning | | 0
BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation | Code | 0
Robust Optimal Power Flow Against Adversarial Attacks: A Tri-Level Optimization Approach | | 0
Chain Association-based Attacking and Shielding Natural Language Processing Systems | | 0
Neural Fingerprints for Adversarial Attack Detection | Code | 0
Attention Masks Help Adversarial Attacks to Bypass Safety Detectors | Code | 0
Seeing is Deceiving: Exploitation of Visual Pathways in Multi-Modal Language Models | | 0
Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning | Code | 0
LiDAttack: Robust Black-box Attack on LiDAR-based Object Detection | Code | 0
Replace-then-Perturb: Targeted Adversarial Attacks With Visual Reasoning for Vision-Language Models | | 0
Pseudo-Conversation Injection for LLM Goal Hijacking | | 0
Keep on Swimming: Real Attackers Only Need Partial Knowledge of a Multi-Model System | | 0
Automated Trustworthiness Oracle Generation for Machine Learning Text Classifiers | | 0
Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack | | 0
Generative Adversarial Patches for Physical Attacks on Cross-Modal Pedestrian Re-Identification | | 0
Adversarial Attacks on Large Language Models Using Regularized Relaxation | Code | 0
Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing | | 0
Toward Robust RALMs: Revealing the Impact of Imperfect Retrieval on Retrieval-Augmented Language Models | Code | 0
Page 20 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
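
The PGD20 and AutoAttack entries above report robust accuracy, i.e. the percentage of test inputs still classified correctly after the attack. The sketch below shows how such a number is commonly computed with a 20-step projected gradient descent (PGD) attack under an L-infinity budget; it is an illustrative outline with placeholder names (`model`, `test_loader`), not the exact protocol behind any row in these tables.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    # Start from a random point inside the eps-ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()   # gradient ascent step on the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)    # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                   # keep a valid pixel range
    return x_adv.detach()

def robust_accuracy(model, test_loader):
    correct, total = 0, 0
    for x, y in test_loader:
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```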