
Adversarial Attack

An adversarial attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
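To make the definition concrete, below is a minimal sketch of the one-step Fast Gradient Sign Method (FGSM), a classic instance of an adversarial attack (illustrative only, not the method of the source paper above). The names `model`, `x`, `y`, and `epsilon` are placeholders, and pixel values are assumed to lie in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb x in the direction that increases the loss.

    Illustrative sketch; assumes a classifier `model`, image inputs `x`
    in [0, 1], and integer class labels `y`.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # A tiny signed step per pixel can flip the model's prediction
    # while remaining imperceptible to the human eye.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Calling `fgsm_attack(model, x, y)` returns an input within L-infinity distance `epsilon` of `x` that the model will often misclassify even though the two inputs look identical to a person.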

Papers

Showing 276–300 of 1808 papers

Title | Status | Hype
Adversarial Attacks and Defenses on Text-to-Image Diffusion Models: A Survey | Code | 2
DLOVE: A new Security Evaluation Tool for Deep Learning Based Watermarking Techniques | - | 0
Rethinking Targeted Adversarial Attacks For Neural Machine Translation | Code | 0
Controlling Whisper: Universal Acoustic Adversarial Attacks to Control Speech Foundation Models | Code | 1
Self-Supervised Representation Learning for Adversarial Attack Detection | - | 0
TrackPGD: Efficient Adversarial Attack using Object Binary Masks against Robust Transformer Trackers | Code | 0
JailbreakHunter: A Visual Analytics Approach for Jailbreak Prompts Discovery from Large-Scale Human-LLM Conversational Datasets | - | 0
L_p-norm Distortion-Efficient Adversarial Attack | - | 0
Adversarial Magnification to Deceive Deepfake Detection through Super Resolution | Code | 1
EvolBA: Evolutionary Boundary Attack under Hard-label Black Box condition | - | 0
Looking From the Future: Multi-order Iterations Can Enhance Adversarial Attack Transferability | - | 0
Query-Efficient Hard-Label Black-Box Attack against Vision Transformers | - | 0
Emotion Loss Attacking: Adversarial Attack Perception for Skeleton based on Multi-dimensional Features | - | 0
Deceptive Diffusion: Generating Synthetic Adversarial Examples | - | 0
IDT: Dual-Task Adversarial Attacks for Privacy Protection | - | 0
On Discrete Prompt Optimization for Diffusion Models | Code | 2
CuDA2: An approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems | - | 0
UNICAD: A Unified Approach for Attack Detection, Noise Reduction and Novel Class Identification | - | 0
The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI | - | 0
GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning | - | 0
AGSOA: Graph Neural Network Targeted Attack Based on Average Gradient and Structure Optimization | - | 0
Saliency Attention and Semantic Similarity-Driven Adversarial Perturbation | - | 0
Let the Noise Speak: Harnessing Noise for a Unified Defense Against Adversarial and Backdoor Attacks | Code | 0
Imperceptible Face Forgery Attack via Adversarial Semantic Mask | Code | 0
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models | Code | 2
Page 12 of 73

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | - | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | - | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | - | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | - | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | - | Unverified
6 | XU-Net | Robust Accuracy | 1 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | - | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | - | Unverified
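In the tables above, "Attack: PGD20" conventionally denotes robust accuracy under a 20-step Projected Gradient Descent attack, and "Attack: AutoAttack" denotes robust accuracy under the AutoAttack ensemble; the claimed values read as the percentage of test inputs still classified correctly under attack. Below is a minimal L-infinity PGD sketch in PyTorch, with illustrative names (`model`, `epsilon`, `alpha`) not tied to any listed entry:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """L-infinity PGD (Madry et al., 2018): iterated signed gradient
    steps, each projected back into the epsilon-ball around `x`.

    Illustrative sketch; assumes inputs in [0, 1] and `model` in eval mode.
    """
    # Random start inside the epsilon-ball improves attack strength.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project back into the epsilon-ball and the valid pixel range.
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

PGD is essentially FGSM iterated with a projection after each step, which is why robust accuracy under PGD20 is typically lower than under a single-step attack; AutoAttack combines several parameter-free attacks and usually gives a still tighter robustness estimate.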