SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
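The definition above can be made concrete with the simplest gradient-based attack. Below is a minimal sketch of the fast gradient sign method (FGSM), assuming a differentiable PyTorch classifier with pixel inputs in [0, 1]; `model`, `x`, `y`, and `eps` are illustrative placeholders, not tied to any paper listed on this page:

```python
# Minimal FGSM sketch: one signed-gradient step that tries to flip the
# model's prediction while keeping the perturbation within an L-infinity
# budget eps. All names here are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Return an adversarial copy of image batch x (pixels in [0, 1])."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss the attacker wants to increase
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # single step in the gradient-sign direction
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range
```

For undefended models, even a budget as small as 8/255 often changes the predicted class while the perturbation remains hard to see, which is what makes the defenses and detectors listed below necessary.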

Papers

Showing 1151–1200 of 1808 papers. Every paper on this page is listed with a Hype score of 0 and no verification status; titles follow below.
UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples
Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks
Uncertainty Measurement of Deep Learning System based on the Convex Hull of Training Sets
Undersensitivity in Neural Reading Comprehension
Understanding Model Ensemble in Transferable Adversarial Attack
Understanding Oversmoothing in GNNs as Consensus in Opinion Dynamics
Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation
UNICAD: A Unified Approach for Attack Detection, Noise Reduction and Novel Class Identification
Bidirectional Contrastive Split Learning for Visual Question Answering
Universal Adversarial Attack on Aligned Multimodal LLMs
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet
Universal Adversarial Attack on Deep Learning Based Prognostics
Universal Adversarial Attack Using Very Few Test Examples
Universal Adversarial Perturbations and Image Spam Classifiers
Universal Attacks on Equivariant Networks
Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning
Classifier-independent Lower-Bounds for Adversarial Robustness
Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
Unlearning or Concealment? A Critical Analysis and Evaluation Metrics for Unlearning in Diffusion Models
Unleashing the Power of Pre-trained Encoders for Universal Adversarial Attack Detection
Unraveling Adversarial Examples against Speaker Identification -- Techniques for Attack Detection and Victim Model Classification
AdvSPADE: Realistic Unrestricted Attacks for Semantic Segmentation
Unrevealed Threats: A Comprehensive Study of the Adversarial Robustness of Underwater Image Enhancement Models
Untargeted Adversarial Attack on Knowledge Graph Embeddings
Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series
Untargeted White-box Adversarial Attack with Heuristic Defence Methods in Real-time Deep Learning based Network Intrusion Detection System
Using an ensemble color space model to tackle adversarial examples
Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples
Using Word Embeddings to Explore the Learned Representations of Convolutional Neural Networks
Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness
Utilizing Multimodal Feature Consistency to Detect Adversarial Examples on Clinical Summaries
Variational Quantum Cloning: Improving Practicality for Quantum Cryptanalysis
Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System
VGFL-SA: Vertical Graph Federated Learning Structure Attack Based on Contrastive Learning
Visual Adversarial Attack on Vision-Language Models for Autonomous Driving
Visual Attack and Defense on Text
VQUNet: Vector Quantization U-Net for Defending Adversarial Atacks by Regularizing Unwanted Noise
Vulnerabilities in AI-generated Image Detection: The Challenge of Adversarial Attacks
Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks
Vulnerability of Appearance-based Gaze Estimation
Vulnerability of Deep Learning
Wasserstein Adversarial Examples on Univariant Time Series Data
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks
Watertox: The Art of Simplicity in Universal Attacks A Cross-Model Framework for Robust Adversarial Generation
Wavelet-Based Image Tokenizer for Vision Transformers
Wavelets Beat Monkeys at Adversarial Robustness
Weighted-Sampling Audio Adversarial Example Attack
Weight Map Layer for Noise and Adversarial Attack Robustness
What Machines See Is Not What They Get: Fooling Scene Text Recognition Models With Adversarial Text Images
On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified |
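For context on the metrics above: "Attack: PGD20" denotes accuracy measured while the input is attacked by 20 steps of projected gradient descent (PGD), and "Attack: AutoAttack" denotes accuracy under the AutoAttack ensemble, a stronger, parameter-free combination of such attacks. Below is a minimal sketch of an L-infinity PGD loop in PyTorch; the step size and radius are illustrative defaults, not the settings used in these benchmarks:

```python
# Sketch of an L-infinity PGD attack; "PGD20" means steps=20. The eps and
# alpha values are illustrative defaults, not the benchmarks' exact settings.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    """Iterated signed-gradient steps, projected back into the eps-ball around x."""
    x = x.clone().detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascent step on the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()        # keep pixels in valid range
    return x_adv
```

Robust accuracy under PGD20 is then the fraction of test inputs the model still classifies correctly after this loop, which is the quantity the "Claimed" column reports for the PGD20 rows.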