SOTAVerified

Adversarial Attack

An Adversarial Attack is a technique for finding a perturbation of an input that changes a machine learning model's prediction. The perturbation can be so small as to be imperceptible to the human eye.

Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
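A minimal sketch of the idea, using the fast gradient sign method (FGSM) against a toy logistic-regression model (an assumed illustration, not code from the source paper): perturb the input by `epsilon * sign(∂loss/∂x)` to push the prediction across the decision boundary.

```python
import numpy as np

def fgsm_attack(x, w, b, y, epsilon):
    """One FGSM step against a logistic-regression model p = sigmoid(w.x + b).

    Moves x in the direction that increases the cross-entropy loss,
    which is the classic way to flip the model's prediction.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # gradient of cross-entropy w.r.t. x
    return x + epsilon * np.sign(grad_x)    # small, uniformly bounded perturbation

# Toy model and an input it classifies correctly before the attack.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.6, -0.2])   # w @ x + b = 1.0 > 0  -> predicted class 1
y = 1.0                     # true label

x_adv = fgsm_attack(x, w, b, y, epsilon=0.6)
print(w @ x + b > 0)        # original prediction: True (class 1)
print(w @ x_adv + b > 0)    # adversarial prediction: False (flipped to class 0)
```

Each coordinate of the input moves by at most `epsilon`, so the attack's strength is controlled by a single budget parameter; image attacks use the same recipe with a much smaller `epsilon` per pixel, which is why the change can be invisible.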

Papers

Showing 851–875 of 1808 papers

| Title | Status | Hype |
| --- | --- | --- |
| Generating Valid and Natural Adversarial Examples with Large Language Models | — | 0 |
| Generating Watermarked Adversarial Texts | — | 0 |
| Image-based Multimodal Models as Intruders: Transferable Multimodal Attacks on Video-based MLLMs | — | 0 |
| Generative Adversarial Patches for Physical Attacks on Cross-Modal Pedestrian Re-Identification | — | 0 |
| Defending Against Adversarial Examples by Regularized Deep Embedding | — | 0 |
| Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training | — | 0 |
| Global Robustness Verification Networks | — | 0 |
| Golden Ratio Search: A Low-Power Adversarial Attack for Deep Learning based Modulation Classification | — | 0 |
| ImF: Implicit Fingerprint for Large Language Models | — | 0 |
| Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world | — | 0 |
| Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning | — | 0 |
| Evaluating the Robustness of the "Ensemble Everything Everywhere" Defense | — | 0 |
| Improved Adversarial Training via Learned Optimizer | — | 0 |
| Defending Against Adversarial Attack in ECG Classification with Adversarial Distillation Training | — | 0 |
| Beyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking | — | 0 |
| Graphfool: Targeted Label Adversarial Attack on Graph Embedding | — | 0 |
| Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems | — | 0 |
| GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning | — | 0 |
| A Differentiable Language Model Adversarial Attack on Text Classifiers | — | 0 |
| Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents | — | 0 |
| Beyond Score Changes: Adversarial Attack on No-Reference Image Quality Assessment from Two Perspectives | — | 0 |
| Deep-RBF Networks Revisited: Robust Classification with Rejection | — | 0 |
| GreedyPixel: Fine-Grained Black-Box Adversarial Attack Via Greedy Algorithm | — | 0 |
| Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons | — | 0 |
| DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs | — | 0 |
Page 35 of 73

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xu et al. | Attack: PGD20 | 78.68 | — | Unverified |
| 2 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 78.13 | — | Unverified |
| 3 | TRADES-ANCRA/ResNet18 | Attack: AutoAttack | 59.7 | — | Unverified |
| 4 | AdvTraining [madry2018] | Attack: PGD20 | 48.44 | — | Unverified |
| 5 | TRADES [zhang2019b] | Attack: PGD20 | 45.9 | — | Unverified |
| 6 | XU-Net | Robust Accuracy | 1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 3-ensemble of multi-resolution self-ensembles | Attack: AutoAttack | 51.28 | — | Unverified |
| 2 | multi-resolution self-ensembles | Attack: AutoAttack | 47.85 | — | Unverified |