
Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be very small, often imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
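
The canonical illustration is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015): take one gradient step that increases the model's loss, bounded by a budget epsilon. Below is a minimal sketch, not taken from any paper on this page; the classifier `model`, the input batch `x` (scaled to [0, 1]), the labels `y`, and the value of `epsilon` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM sketch, assuming a differentiable PyTorch classifier."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step of size epsilon along the sign of the input gradient,
    # then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Stronger attacks that appear throughout the list and benchmarks below, such as PGD and AutoAttack, iterate or ensemble refinements of this basic gradient step under a fixed perturbation budget.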

Papers

Showing 551–600 of 1808 papers

Title | Status | Hype
Autonomous LLM-Enhanced Adversarial Attack for Text-to-Motion | | 0
Securing the Diagnosis of Medical Imaging: An In-depth Analysis of AI-Resistant Attacks | | 0
OTAD: An Optimal Transport-Induced Robust Model for Agnostic Adversarial Attack | | 0
Vulnerabilities in AI-generated Image Detection: The Challenge of Adversarial Attacks | | 0
Physical Adversarial Attack on Monocular Depth Estimation via Shape-Varying Patches | | 0
Beyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking | | 0
Cross-Task Attack: A Self-Supervision Generative Framework Based on Attention Shift | | 0
Compressed models are NOT miniature versions of large models | | 0
Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection | Code | 0
AEMIM: Adversarial Examples Meet Masked Image Modeling | | 0
Enhancing TinyML Security: Study of Adversarial Attack Transferability | | 0
Investigating Imperceptibility of Adversarial Attacks on Tabular Data: An Empirical Analysis | Code | 0
Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks | | 0
Transferable 3D Adversarial Shape Completion using Diffusion Models | Code | 0
SemiAdv: Query-Efficient Black-Box Adversarial Attack with Unlabeled Images | | 0
Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems | Code | 0
DLOVE: A new Security Evaluation Tool for Deep Learning Based Watermarking Techniques | | 0
Rethinking Targeted Adversarial Attacks For Neural Machine Translation | Code | 0
Self-Supervised Representation Learning for Adversarial Attack Detection | | 0
TrackPGD: Efficient Adversarial Attack using Object Binary Masks against Robust Transformer Trackers | Code | 0
JailbreakHunter: A Visual Analytics Approach for Jailbreak Prompts Discovery from Large-Scale Human-LLM Conversational Datasets | | 0
L_p-norm Distortion-Efficient Adversarial Attack | | 0
Looking From the Future: Multi-order Iterations Can Enhance Adversarial Attack Transferability | | 0
EvolBA: Evolutionary Boundary Attack under Hard-label Black Box condition | | 0
Query-Efficient Hard-Label Black-Box Attack against Vision Transformers | | 0
Emotion Loss Attacking: Adversarial Attack Perception for Skeleton based on Multi-dimensional Features | | 0
IDT: Dual-Task Adversarial Attacks for Privacy Protection | | 0
Deceptive Diffusion: Generating Synthetic Adversarial Examples | | 0
CuDA2: An approach for Incorporating Traitor Agents into Cooperative Multi-Agent Systems | | 0
UNICAD: A Unified Approach for Attack Detection, Noise Reduction and Novel Class Identification | | 0
The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI | | 0
GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning | | 0
AGSOA: Graph Neural Network Targeted Attack Based on Average Gradient and Structure Optimization | | 0
Saliency Attention and Semantic Similarity-Driven Adversarial Perturbation | | 0
Let the Noise Speak: Harnessing Noise for a Unified Defense Against Adversarial and Backdoor Attacks | Code | 0
Imperceptible Face Forgery Attack via Adversarial Semantic Mask | Code | 0
KGPA: Robustness Evaluation for Large Language Models via Cross-Domain Knowledge Graphs | Code | 0
Explainable Graph Neural Networks Under Fire | Code | 0
DMS: Addressing Information Loss with More Steps for Pragmatic Adversarial Attacks | | 0
SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner | | 0
VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise | | 0
Graph Neural Network Explanations are Fragile | Code | 0
SVASTIN: Sparse Video Adversarial Attack via Spatio-Temporal Invertible Neural Networks | Code | 0
Efficient Black-box Adversarial Attacks via Bayesian Optimization Guided by a Function Prior | Code | 0
Wavelet-Based Image Tokenizer for Vision Transformers | | 0
Uncertainty Measurement of Deep Learning System based on the Convex Hull of Training Sets | | 0
Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack | | 0
Rethinking Independent Cross-Entropy Loss For Graph-Structured Data | Code | 0
Adversarial Attacks on Hidden Tasks in Multi-Task Learning | | 0
AdjointDEIS: Efficient Gradients for Diffusion Models | Code | 0
Page 12 of 37

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xu et al. | Attack: PGD20 | 78.68 | | Unverified
2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | | Unverified
3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | | Unverified
4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | | Unverified
5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | | Unverified
6 | XU-Net | Robust Accuracy | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | | Unverified
2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | | Unverified
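
A note on the metrics above: "Attack: PGD20" and "Attack: AutoAttack" report robust accuracy, i.e. the share of test inputs the model still classifies correctly after the named attack is applied (higher is better). PGD20 is 20 steps of Projected Gradient Descent; AutoAttack is a parameter-free ensemble of attacks. Below is a minimal PGD sketch under the same assumptions as the FGSM example above; the step size `alpha` and the budget `epsilon` are illustrative, not values from these benchmarks.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=20):
    """PGD sketch: repeat FGSM-style steps, projecting each iterate
    back into the L-infinity ball of radius epsilon around `x`."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Projection: stay within epsilon of the clean input and in [0, 1].
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0.0, 1.0)
    return x_adv.detach()
```

Robust accuracy is then the mean of `model(pgd_attack(model, x, y)).argmax(1) == y` over the test set.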